00:00:00.001 Started by upstream project "autotest-per-patch" build number 132391 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.131 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.131 The recommended git tool is: git 00:00:00.132 using credential 00000000-0000-0000-0000-000000000002 00:00:00.133 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.181 Fetching changes from the remote Git repository 00:00:00.183 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.226 Using shallow fetch with depth 1 00:00:00.226 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.226 > git --version # timeout=10 00:00:00.265 > git --version # 'git version 2.39.2' 00:00:00.265 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.296 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.296 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.699 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.715 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.728 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.728 > git config core.sparsecheckout # timeout=10 00:00:06.739 > git read-tree -mu HEAD # timeout=10 00:00:06.755 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.776 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.776 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.868 [Pipeline] Start of Pipeline 00:00:06.881 [Pipeline] library 00:00:06.883 Loading library shm_lib@master 00:00:06.883 Library shm_lib@master is cached. Copying from home. 00:00:06.897 [Pipeline] node 00:00:06.917 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.919 [Pipeline] { 00:00:06.929 [Pipeline] catchError 00:00:06.930 [Pipeline] { 00:00:06.942 [Pipeline] wrap 00:00:06.948 [Pipeline] { 00:00:06.956 [Pipeline] stage 00:00:06.958 [Pipeline] { (Prologue) 00:00:06.975 [Pipeline] echo 00:00:06.976 Node: VM-host-SM16 00:00:06.982 [Pipeline] cleanWs 00:00:06.991 [WS-CLEANUP] Deleting project workspace... 00:00:06.991 [WS-CLEANUP] Deferred wipeout is used... 
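For reference, the jbp checkout recorded at the top of this log (a depth-1 fetch of refs/heads/master from the Gerrit build_pool mirror, followed by a forced checkout of the fetched revision) can be reproduced standalone roughly as in the sketch below; the target directory name is a placeholder, and the commit shown is simply the one this particular run landed on:

    git init jbp && cd jbp
    # shallow fetch of the branch the job pins to (depth=1, as in the log above)
    git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # detach onto the fetched tip; in this run that was db4637e8b949f278f369ec13f70585206ccd9507
    git checkout -f FETCH_HEAD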
00:00:06.998 [WS-CLEANUP] done 00:00:07.185 [Pipeline] setCustomBuildProperty 00:00:07.254 [Pipeline] httpRequest 00:00:07.558 [Pipeline] echo 00:00:07.559 Sorcerer 10.211.164.20 is alive 00:00:07.571 [Pipeline] retry 00:00:07.574 [Pipeline] { 00:00:07.587 [Pipeline] httpRequest 00:00:07.590 HttpMethod: GET 00:00:07.591 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.591 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.603 Response Code: HTTP/1.1 200 OK 00:00:07.603 Success: Status code 200 is in the accepted range: 200,404 00:00:07.604 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.385 [Pipeline] } 00:00:09.405 [Pipeline] // retry 00:00:09.413 [Pipeline] sh 00:00:09.692 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.706 [Pipeline] httpRequest 00:00:10.029 [Pipeline] echo 00:00:10.030 Sorcerer 10.211.164.20 is alive 00:00:10.039 [Pipeline] retry 00:00:10.042 [Pipeline] { 00:00:10.055 [Pipeline] httpRequest 00:00:10.060 HttpMethod: GET 00:00:10.061 URL: http://10.211.164.20/packages/spdk_d2ebd983ec796cf3c9bd94783f62b7de1f7bf0f0.tar.gz 00:00:10.061 Sending request to url: http://10.211.164.20/packages/spdk_d2ebd983ec796cf3c9bd94783f62b7de1f7bf0f0.tar.gz 00:00:10.078 Response Code: HTTP/1.1 200 OK 00:00:10.078 Success: Status code 200 is in the accepted range: 200,404 00:00:10.079 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_d2ebd983ec796cf3c9bd94783f62b7de1f7bf0f0.tar.gz 00:01:32.384 [Pipeline] } 00:01:32.402 [Pipeline] // retry 00:01:32.410 [Pipeline] sh 00:01:32.691 + tar --no-same-owner -xf spdk_d2ebd983ec796cf3c9bd94783f62b7de1f7bf0f0.tar.gz 00:01:35.988 [Pipeline] sh 00:01:36.267 + git -C spdk log --oneline -n5 00:01:36.267 d2ebd983e bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:36.267 fa4f4fd15 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:01:36.267 b1f0bbae7 nvmf: Expose DIF type of namespace to host again 00:01:36.267 f9d18d578 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:01:36.267 a361eb5e2 nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:01:36.288 [Pipeline] writeFile 00:01:36.305 [Pipeline] sh 00:01:36.586 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:36.598 [Pipeline] sh 00:01:36.880 + cat autorun-spdk.conf 00:01:36.880 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.880 SPDK_TEST_NVMF=1 00:01:36.880 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.880 SPDK_TEST_URING=1 00:01:36.880 SPDK_TEST_USDT=1 00:01:36.880 SPDK_RUN_UBSAN=1 00:01:36.880 NET_TYPE=virt 00:01:36.880 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:36.887 RUN_NIGHTLY=0 00:01:36.889 [Pipeline] } 00:01:36.903 [Pipeline] // stage 00:01:36.917 [Pipeline] stage 00:01:36.920 [Pipeline] { (Run VM) 00:01:36.932 [Pipeline] sh 00:01:37.213 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:37.213 + echo 'Start stage prepare_nvme.sh' 00:01:37.213 Start stage prepare_nvme.sh 00:01:37.213 + [[ -n 3 ]] 00:01:37.213 + disk_prefix=ex3 00:01:37.213 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:37.213 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:37.213 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:37.213 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.213 ++ SPDK_TEST_NVMF=1 
00:01:37.213 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.213 ++ SPDK_TEST_URING=1 00:01:37.213 ++ SPDK_TEST_USDT=1 00:01:37.213 ++ SPDK_RUN_UBSAN=1 00:01:37.213 ++ NET_TYPE=virt 00:01:37.213 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.213 ++ RUN_NIGHTLY=0 00:01:37.213 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:37.213 + nvme_files=() 00:01:37.213 + declare -A nvme_files 00:01:37.213 + backend_dir=/var/lib/libvirt/images/backends 00:01:37.213 + nvme_files['nvme.img']=5G 00:01:37.213 + nvme_files['nvme-cmb.img']=5G 00:01:37.214 + nvme_files['nvme-multi0.img']=4G 00:01:37.214 + nvme_files['nvme-multi1.img']=4G 00:01:37.214 + nvme_files['nvme-multi2.img']=4G 00:01:37.214 + nvme_files['nvme-openstack.img']=8G 00:01:37.214 + nvme_files['nvme-zns.img']=5G 00:01:37.214 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:37.214 + (( SPDK_TEST_FTL == 1 )) 00:01:37.214 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:37.214 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:37.214 + for nvme in "${!nvme_files[@]}" 00:01:37.214 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:37.214 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.214 + for nvme in "${!nvme_files[@]}" 00:01:37.214 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:37.780 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.780 + for nvme in "${!nvme_files[@]}" 00:01:37.780 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:37.780 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:37.780 + for nvme in "${!nvme_files[@]}" 00:01:37.780 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:38.040 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:38.040 + for nvme in "${!nvme_files[@]}" 00:01:38.040 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:38.040 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:38.040 + for nvme in "${!nvme_files[@]}" 00:01:38.040 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:38.040 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:38.040 + for nvme in "${!nvme_files[@]}" 00:01:38.040 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:38.606 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:38.606 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:38.606 + echo 'End stage prepare_nvme.sh' 00:01:38.606 End stage prepare_nvme.sh 00:01:38.616 [Pipeline] sh 00:01:38.896 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:38.896 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b 
/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:01:38.896 00:01:38.896 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:38.896 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:38.896 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:38.896 HELP=0 00:01:38.896 DRY_RUN=0 00:01:38.896 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:01:38.896 NVME_DISKS_TYPE=nvme,nvme, 00:01:38.896 NVME_AUTO_CREATE=0 00:01:38.896 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:01:38.896 NVME_CMB=,, 00:01:38.896 NVME_PMR=,, 00:01:38.896 NVME_ZNS=,, 00:01:38.896 NVME_MS=,, 00:01:38.896 NVME_FDP=,, 00:01:38.896 SPDK_VAGRANT_DISTRO=fedora39 00:01:38.896 SPDK_VAGRANT_VMCPU=10 00:01:38.896 SPDK_VAGRANT_VMRAM=12288 00:01:38.896 SPDK_VAGRANT_PROVIDER=libvirt 00:01:38.896 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:38.896 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:38.896 SPDK_OPENSTACK_NETWORK=0 00:01:38.896 VAGRANT_PACKAGE_BOX=0 00:01:38.896 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:38.896 FORCE_DISTRO=true 00:01:38.896 VAGRANT_BOX_VERSION= 00:01:38.896 EXTRA_VAGRANTFILES= 00:01:38.896 NIC_MODEL=e1000 00:01:38.896 00:01:38.896 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:38.896 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:42.189 Bringing machine 'default' up with 'libvirt' provider... 00:01:42.447 ==> default: Creating image (snapshot of base box volume). 00:01:42.707 ==> default: Creating domain with the following settings... 
00:01:42.707 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732108854_628a9d852e0292e0440b 00:01:42.707 ==> default: -- Domain type: kvm 00:01:42.707 ==> default: -- Cpus: 10 00:01:42.707 ==> default: -- Feature: acpi 00:01:42.707 ==> default: -- Feature: apic 00:01:42.707 ==> default: -- Feature: pae 00:01:42.707 ==> default: -- Memory: 12288M 00:01:42.707 ==> default: -- Memory Backing: hugepages: 00:01:42.707 ==> default: -- Management MAC: 00:01:42.707 ==> default: -- Loader: 00:01:42.707 ==> default: -- Nvram: 00:01:42.707 ==> default: -- Base box: spdk/fedora39 00:01:42.707 ==> default: -- Storage pool: default 00:01:42.707 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732108854_628a9d852e0292e0440b.img (20G) 00:01:42.707 ==> default: -- Volume Cache: default 00:01:42.707 ==> default: -- Kernel: 00:01:42.707 ==> default: -- Initrd: 00:01:42.707 ==> default: -- Graphics Type: vnc 00:01:42.707 ==> default: -- Graphics Port: -1 00:01:42.707 ==> default: -- Graphics IP: 127.0.0.1 00:01:42.707 ==> default: -- Graphics Password: Not defined 00:01:42.707 ==> default: -- Video Type: cirrus 00:01:42.707 ==> default: -- Video VRAM: 9216 00:01:42.707 ==> default: -- Sound Type: 00:01:42.707 ==> default: -- Keymap: en-us 00:01:42.707 ==> default: -- TPM Path: 00:01:42.707 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:42.707 ==> default: -- Command line args: 00:01:42.707 ==> default: -> value=-device, 00:01:42.707 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:42.707 ==> default: -> value=-drive, 00:01:42.707 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:42.707 ==> default: -> value=-device, 00:01:42.707 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.707 ==> default: -> value=-device, 00:01:42.707 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:42.707 ==> default: -> value=-drive, 00:01:42.707 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:42.707 ==> default: -> value=-device, 00:01:42.707 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.707 ==> default: -> value=-drive, 00:01:42.707 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:42.707 ==> default: -> value=-device, 00:01:42.707 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.707 ==> default: -> value=-drive, 00:01:42.707 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:42.707 ==> default: -> value=-device, 00:01:42.707 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.707 ==> default: Creating shared folders metadata... 00:01:42.707 ==> default: Starting domain. 00:01:44.613 ==> default: Waiting for domain to get an IP address... 00:01:59.497 ==> default: Waiting for SSH to become available... 00:02:00.874 ==> default: Configuring and enabling network interfaces... 
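The -device nvme / -device nvme-ns pairs in the domain command line above wire the raw backing files prepared in prepare_nvme.sh into two emulated controllers: ex3-nvme.img becomes the single namespace of the controller with serial 12340, and the three ex3-nvme-multi*.img files become namespaces 1-3 of the controller with serial 12341. A minimal standalone QEMU invocation with the same layout might look like the sketch below; the qemu-img calls merely stand in for scripts/vagrant/create_nvme_img.sh, the boot disk name is a placeholder, and the networking and shared-folder setup handled by the Vagrant box are omitted:

    BACKENDS=/var/lib/libvirt/images/backends
    # stand-in for create_nvme_img.sh: raw, falloc-preallocated images, sizes as in the log
    qemu-img create -f raw -o preallocation=falloc "$BACKENDS/ex3-nvme.img" 5G
    qemu-img create -f raw -o preallocation=falloc "$BACKENDS/ex3-nvme-multi0.img" 4G
    qemu-img create -f raw -o preallocation=falloc "$BACKENDS/ex3-nvme-multi1.img" 4G
    qemu-img create -f raw -o preallocation=falloc "$BACKENDS/ex3-nvme-multi2.img" 4G

    # controller serial 12340: one namespace backed by ex3-nvme.img
    # controller serial 12341: three namespaces backed by the multi* images
    qemu-system-x86_64 -enable-kvm -smp 10 -m 12288 \
      -drive file=fedora39-boot.qcow2,if=virtio \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file="$BACKENDS/ex3-nvme.img",if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file="$BACKENDS/ex3-nvme-multi0.img",if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file="$BACKENDS/ex3-nvme-multi1.img",if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file="$BACKENDS/ex3-nvme-multi2.img",if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096

Inside the guest this shows up exactly as the later setup.sh status output records it: nvme0 with one namespace and nvme1 with three.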
00:02:06.145 default: SSH address: 192.168.121.122:22 00:02:06.145 default: SSH username: vagrant 00:02:06.145 default: SSH auth method: private key 00:02:08.049 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:16.165 ==> default: Mounting SSHFS shared folder... 00:02:17.540 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:17.540 ==> default: Checking Mount.. 00:02:18.916 ==> default: Folder Successfully Mounted! 00:02:18.916 ==> default: Running provisioner: file... 00:02:19.484 default: ~/.gitconfig => .gitconfig 00:02:20.051 00:02:20.051 SUCCESS! 00:02:20.051 00:02:20.051 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:20.051 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:20.051 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:20.051 00:02:20.060 [Pipeline] } 00:02:20.076 [Pipeline] // stage 00:02:20.085 [Pipeline] dir 00:02:20.086 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:20.088 [Pipeline] { 00:02:20.101 [Pipeline] catchError 00:02:20.103 [Pipeline] { 00:02:20.115 [Pipeline] sh 00:02:20.451 + vagrant ssh-config --host vagrant 00:02:20.451 + sed -ne /^Host/,$p 00:02:20.451 + tee ssh_conf 00:02:24.639 Host vagrant 00:02:24.639 HostName 192.168.121.122 00:02:24.639 User vagrant 00:02:24.639 Port 22 00:02:24.639 UserKnownHostsFile /dev/null 00:02:24.639 StrictHostKeyChecking no 00:02:24.639 PasswordAuthentication no 00:02:24.639 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:24.639 IdentitiesOnly yes 00:02:24.639 LogLevel FATAL 00:02:24.639 ForwardAgent yes 00:02:24.639 ForwardX11 yes 00:02:24.639 00:02:24.651 [Pipeline] withEnv 00:02:24.653 [Pipeline] { 00:02:24.665 [Pipeline] sh 00:02:24.942 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:24.942 source /etc/os-release 00:02:24.942 [[ -e /image.version ]] && img=$(< /image.version) 00:02:24.942 # Minimal, systemd-like check. 00:02:24.942 if [[ -e /.dockerenv ]]; then 00:02:24.942 # Clear garbage from the node's name: 00:02:24.942 # agt-er_autotest_547-896 -> autotest_547-896 00:02:24.942 # $HOSTNAME is the actual container id 00:02:24.942 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:24.942 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:24.942 # We can assume this is a mount from a host where container is running, 00:02:24.942 # so fetch its hostname to easily identify the target swarm worker. 
00:02:24.942 container="$(< /etc/hostname) ($agent)" 00:02:24.942 else 00:02:24.942 # Fallback 00:02:24.942 container=$agent 00:02:24.942 fi 00:02:24.942 fi 00:02:24.942 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:24.942 00:02:25.210 [Pipeline] } 00:02:25.229 [Pipeline] // withEnv 00:02:25.239 [Pipeline] setCustomBuildProperty 00:02:25.256 [Pipeline] stage 00:02:25.259 [Pipeline] { (Tests) 00:02:25.279 [Pipeline] sh 00:02:25.557 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:25.570 [Pipeline] sh 00:02:25.849 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:26.124 [Pipeline] timeout 00:02:26.124 Timeout set to expire in 1 hr 0 min 00:02:26.126 [Pipeline] { 00:02:26.140 [Pipeline] sh 00:02:26.420 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:26.987 HEAD is now at d2ebd983e bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:02:27.000 [Pipeline] sh 00:02:27.281 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:27.549 [Pipeline] sh 00:02:27.824 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:27.842 [Pipeline] sh 00:02:28.123 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:28.123 ++ readlink -f spdk_repo 00:02:28.123 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:28.123 + [[ -n /home/vagrant/spdk_repo ]] 00:02:28.123 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:28.123 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:28.123 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:28.123 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:28.123 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:28.123 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:28.123 + cd /home/vagrant/spdk_repo 00:02:28.123 + source /etc/os-release 00:02:28.123 ++ NAME='Fedora Linux' 00:02:28.123 ++ VERSION='39 (Cloud Edition)' 00:02:28.123 ++ ID=fedora 00:02:28.123 ++ VERSION_ID=39 00:02:28.123 ++ VERSION_CODENAME= 00:02:28.123 ++ PLATFORM_ID=platform:f39 00:02:28.123 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:28.123 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:28.123 ++ LOGO=fedora-logo-icon 00:02:28.123 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:28.123 ++ HOME_URL=https://fedoraproject.org/ 00:02:28.123 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:28.123 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:28.123 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:28.123 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:28.123 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:28.123 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:28.123 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:28.123 ++ SUPPORT_END=2024-11-12 00:02:28.123 ++ VARIANT='Cloud Edition' 00:02:28.123 ++ VARIANT_ID=cloud 00:02:28.123 + uname -a 00:02:28.123 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:28.123 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:28.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:28.691 Hugepages 00:02:28.691 node hugesize free / total 00:02:28.691 node0 1048576kB 0 / 0 00:02:28.691 node0 2048kB 0 / 0 00:02:28.691 00:02:28.691 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:28.691 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:28.691 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:28.950 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:28.950 + rm -f /tmp/spdk-ld-path 00:02:28.950 + source autorun-spdk.conf 00:02:28.950 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:28.950 ++ SPDK_TEST_NVMF=1 00:02:28.950 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:28.950 ++ SPDK_TEST_URING=1 00:02:28.950 ++ SPDK_TEST_USDT=1 00:02:28.950 ++ SPDK_RUN_UBSAN=1 00:02:28.950 ++ NET_TYPE=virt 00:02:28.950 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:28.950 ++ RUN_NIGHTLY=0 00:02:28.950 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:28.950 + [[ -n '' ]] 00:02:28.950 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:28.950 + for M in /var/spdk/build-*-manifest.txt 00:02:28.950 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:28.950 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:28.950 + for M in /var/spdk/build-*-manifest.txt 00:02:28.950 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:28.950 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:28.950 + for M in /var/spdk/build-*-manifest.txt 00:02:28.950 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:28.950 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:28.950 ++ uname 00:02:28.950 + [[ Linux == \L\i\n\u\x ]] 00:02:28.950 + sudo dmesg -T 00:02:28.950 + sudo dmesg --clear 00:02:28.950 + dmesg_pid=5366 00:02:28.950 + [[ Fedora Linux == FreeBSD ]] 00:02:28.950 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:28.950 + sudo dmesg -Tw 00:02:28.950 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:28.950 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:28.950 + [[ -x /usr/src/fio-static/fio ]] 00:02:28.950 + export FIO_BIN=/usr/src/fio-static/fio 00:02:28.950 + FIO_BIN=/usr/src/fio-static/fio 00:02:28.950 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:28.950 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:28.950 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:28.950 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:28.950 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:28.950 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:28.950 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:28.950 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:28.950 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:28.950 13:21:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:28.950 13:21:40 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:28.950 13:21:40 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:28.950 13:21:40 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:28.950 13:21:40 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:28.950 13:21:40 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:28.950 13:21:40 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:28.950 13:21:40 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:28.950 13:21:40 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:28.950 13:21:40 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:28.950 13:21:40 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:28.950 13:21:40 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:28.950 13:21:40 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:29.210 13:21:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:29.210 13:21:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:29.210 13:21:40 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:29.210 13:21:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:29.210 13:21:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.210 13:21:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.210 13:21:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.210 13:21:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.210 13:21:40 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.210 13:21:40 -- paths/export.sh@5 -- $ export PATH 00:02:29.210 13:21:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.210 13:21:40 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:29.210 13:21:40 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:29.210 13:21:40 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732108900.XXXXXX 00:02:29.210 13:21:40 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732108900.JtVHo1 00:02:29.210 13:21:40 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:29.210 13:21:40 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:29.210 13:21:40 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:29.210 13:21:40 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:29.210 13:21:40 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:29.210 13:21:40 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:29.210 13:21:40 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:29.210 13:21:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.210 13:21:40 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:29.210 13:21:40 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:29.210 13:21:40 -- pm/common@17 -- $ local monitor 00:02:29.210 13:21:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.210 13:21:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.210 13:21:40 -- pm/common@21 -- $ date +%s 00:02:29.210 13:21:40 -- pm/common@25 -- $ sleep 1 00:02:29.210 13:21:40 -- pm/common@21 -- $ date +%s 00:02:29.210 13:21:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732108900 00:02:29.210 13:21:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732108900 00:02:29.210 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732108900_collect-cpu-load.pm.log 00:02:29.210 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732108900_collect-vmstat.pm.log 00:02:30.147 13:21:41 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:30.147 13:21:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:30.147 13:21:41 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:30.147 13:21:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:30.147 13:21:41 -- spdk/autobuild.sh@16 -- $ date -u 00:02:30.147 Wed Nov 20 01:21:41 PM UTC 2024 00:02:30.147 13:21:41 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:30.147 v25.01-pre-252-gd2ebd983e 00:02:30.147 13:21:41 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:30.147 13:21:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:30.147 13:21:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:30.147 13:21:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:30.147 13:21:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:30.147 13:21:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.147 ************************************ 00:02:30.147 START TEST ubsan 00:02:30.147 ************************************ 00:02:30.147 using ubsan 00:02:30.148 13:21:42 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:30.148 00:02:30.148 real 0m0.000s 00:02:30.148 user 0m0.000s 00:02:30.148 sys 0m0.000s 00:02:30.148 13:21:42 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:30.148 13:21:42 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:30.148 ************************************ 00:02:30.148 END TEST ubsan 00:02:30.148 ************************************ 00:02:30.148 13:21:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:30.148 13:21:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:30.148 13:21:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:30.148 13:21:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:30.148 13:21:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:30.148 13:21:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:30.148 13:21:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:30.148 13:21:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:30.148 13:21:42 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:30.407 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:30.407 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:30.666 Using 'verbs' RDMA provider 00:02:46.515 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:58.720 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:58.720 Creating mk/config.mk...done. 00:02:58.720 Creating mk/cc.flags.mk...done. 00:02:58.720 Type 'make' to build. 
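The configure invocation above enables UBSan, coverage, io_uring, ublk, USDT probes and shared libraries while disabling the unit tests, and the step that follows simply runs make. Stripped of the CI wrapper, the same build boils down to roughly the sketch below; the clone location is a placeholder and only a subset of the recorded flags is shown:

    git clone https://github.com/spdk/spdk.git /tmp/spdk && cd /tmp/spdk
    git submodule update --init        # pulls the bundled DPDK that meson configures below
    sudo scripts/pkgdep.sh             # install distro build dependencies
    ./configure --enable-debug --enable-werror --enable-ubsan --enable-coverage \
        --with-uring --with-ublk --with-usdt --with-shared --disable-unit-tests
    make -j"$(nproc)"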
00:02:58.720 13:22:10 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:58.720 13:22:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:58.720 13:22:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:58.720 13:22:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:58.720 ************************************ 00:02:58.720 START TEST make 00:02:58.720 ************************************ 00:02:58.720 13:22:10 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:58.979 make[1]: Nothing to be done for 'all'. 00:03:11.208 The Meson build system 00:03:11.208 Version: 1.5.0 00:03:11.208 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:11.208 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:11.208 Build type: native build 00:03:11.208 Program cat found: YES (/usr/bin/cat) 00:03:11.208 Project name: DPDK 00:03:11.208 Project version: 24.03.0 00:03:11.208 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:11.208 C linker for the host machine: cc ld.bfd 2.40-14 00:03:11.208 Host machine cpu family: x86_64 00:03:11.208 Host machine cpu: x86_64 00:03:11.208 Message: ## Building in Developer Mode ## 00:03:11.208 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:11.208 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:11.208 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:11.208 Program python3 found: YES (/usr/bin/python3) 00:03:11.208 Program cat found: YES (/usr/bin/cat) 00:03:11.208 Compiler for C supports arguments -march=native: YES 00:03:11.208 Checking for size of "void *" : 8 00:03:11.208 Checking for size of "void *" : 8 (cached) 00:03:11.208 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:11.208 Library m found: YES 00:03:11.208 Library numa found: YES 00:03:11.208 Has header "numaif.h" : YES 00:03:11.208 Library fdt found: NO 00:03:11.208 Library execinfo found: NO 00:03:11.208 Has header "execinfo.h" : YES 00:03:11.208 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:11.208 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:11.208 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:11.208 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:11.208 Run-time dependency openssl found: YES 3.1.1 00:03:11.208 Run-time dependency libpcap found: YES 1.10.4 00:03:11.208 Has header "pcap.h" with dependency libpcap: YES 00:03:11.208 Compiler for C supports arguments -Wcast-qual: YES 00:03:11.208 Compiler for C supports arguments -Wdeprecated: YES 00:03:11.208 Compiler for C supports arguments -Wformat: YES 00:03:11.208 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:11.208 Compiler for C supports arguments -Wformat-security: NO 00:03:11.208 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:11.208 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:11.208 Compiler for C supports arguments -Wnested-externs: YES 00:03:11.208 Compiler for C supports arguments -Wold-style-definition: YES 00:03:11.208 Compiler for C supports arguments -Wpointer-arith: YES 00:03:11.208 Compiler for C supports arguments -Wsign-compare: YES 00:03:11.208 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:11.208 Compiler for C supports arguments -Wundef: YES 00:03:11.208 Compiler for C supports arguments -Wwrite-strings: YES 00:03:11.208 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:03:11.208 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:11.208 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:11.208 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:11.208 Program objdump found: YES (/usr/bin/objdump) 00:03:11.208 Compiler for C supports arguments -mavx512f: YES 00:03:11.208 Checking if "AVX512 checking" compiles: YES 00:03:11.208 Fetching value of define "__SSE4_2__" : 1 00:03:11.208 Fetching value of define "__AES__" : 1 00:03:11.208 Fetching value of define "__AVX__" : 1 00:03:11.208 Fetching value of define "__AVX2__" : 1 00:03:11.208 Fetching value of define "__AVX512BW__" : (undefined) 00:03:11.208 Fetching value of define "__AVX512CD__" : (undefined) 00:03:11.208 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:11.208 Fetching value of define "__AVX512F__" : (undefined) 00:03:11.208 Fetching value of define "__AVX512VL__" : (undefined) 00:03:11.208 Fetching value of define "__PCLMUL__" : 1 00:03:11.208 Fetching value of define "__RDRND__" : 1 00:03:11.208 Fetching value of define "__RDSEED__" : 1 00:03:11.208 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:11.208 Fetching value of define "__znver1__" : (undefined) 00:03:11.208 Fetching value of define "__znver2__" : (undefined) 00:03:11.208 Fetching value of define "__znver3__" : (undefined) 00:03:11.208 Fetching value of define "__znver4__" : (undefined) 00:03:11.208 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:11.208 Message: lib/log: Defining dependency "log" 00:03:11.208 Message: lib/kvargs: Defining dependency "kvargs" 00:03:11.208 Message: lib/telemetry: Defining dependency "telemetry" 00:03:11.208 Checking for function "getentropy" : NO 00:03:11.208 Message: lib/eal: Defining dependency "eal" 00:03:11.208 Message: lib/ring: Defining dependency "ring" 00:03:11.208 Message: lib/rcu: Defining dependency "rcu" 00:03:11.208 Message: lib/mempool: Defining dependency "mempool" 00:03:11.208 Message: lib/mbuf: Defining dependency "mbuf" 00:03:11.208 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:11.208 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:11.208 Compiler for C supports arguments -mpclmul: YES 00:03:11.208 Compiler for C supports arguments -maes: YES 00:03:11.208 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:11.208 Compiler for C supports arguments -mavx512bw: YES 00:03:11.208 Compiler for C supports arguments -mavx512dq: YES 00:03:11.208 Compiler for C supports arguments -mavx512vl: YES 00:03:11.208 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:11.208 Compiler for C supports arguments -mavx2: YES 00:03:11.208 Compiler for C supports arguments -mavx: YES 00:03:11.208 Message: lib/net: Defining dependency "net" 00:03:11.208 Message: lib/meter: Defining dependency "meter" 00:03:11.208 Message: lib/ethdev: Defining dependency "ethdev" 00:03:11.208 Message: lib/pci: Defining dependency "pci" 00:03:11.208 Message: lib/cmdline: Defining dependency "cmdline" 00:03:11.208 Message: lib/hash: Defining dependency "hash" 00:03:11.208 Message: lib/timer: Defining dependency "timer" 00:03:11.208 Message: lib/compressdev: Defining dependency "compressdev" 00:03:11.208 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:11.208 Message: lib/dmadev: Defining dependency "dmadev" 00:03:11.208 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:11.208 Message: lib/power: Defining 
dependency "power" 00:03:11.208 Message: lib/reorder: Defining dependency "reorder" 00:03:11.208 Message: lib/security: Defining dependency "security" 00:03:11.208 Has header "linux/userfaultfd.h" : YES 00:03:11.208 Has header "linux/vduse.h" : YES 00:03:11.208 Message: lib/vhost: Defining dependency "vhost" 00:03:11.209 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:11.209 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:11.209 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:11.209 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:11.209 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:11.209 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:11.209 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:11.209 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:11.209 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:11.209 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:11.209 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:11.209 Configuring doxy-api-html.conf using configuration 00:03:11.209 Configuring doxy-api-man.conf using configuration 00:03:11.209 Program mandb found: YES (/usr/bin/mandb) 00:03:11.209 Program sphinx-build found: NO 00:03:11.209 Configuring rte_build_config.h using configuration 00:03:11.209 Message: 00:03:11.209 ================= 00:03:11.209 Applications Enabled 00:03:11.209 ================= 00:03:11.209 00:03:11.209 apps: 00:03:11.209 00:03:11.209 00:03:11.209 Message: 00:03:11.209 ================= 00:03:11.209 Libraries Enabled 00:03:11.209 ================= 00:03:11.209 00:03:11.209 libs: 00:03:11.209 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:11.209 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:11.209 cryptodev, dmadev, power, reorder, security, vhost, 00:03:11.209 00:03:11.209 Message: 00:03:11.209 =============== 00:03:11.209 Drivers Enabled 00:03:11.209 =============== 00:03:11.209 00:03:11.209 common: 00:03:11.209 00:03:11.209 bus: 00:03:11.209 pci, vdev, 00:03:11.209 mempool: 00:03:11.209 ring, 00:03:11.209 dma: 00:03:11.209 00:03:11.209 net: 00:03:11.209 00:03:11.209 crypto: 00:03:11.209 00:03:11.209 compress: 00:03:11.209 00:03:11.209 vdpa: 00:03:11.209 00:03:11.209 00:03:11.209 Message: 00:03:11.209 ================= 00:03:11.209 Content Skipped 00:03:11.209 ================= 00:03:11.209 00:03:11.209 apps: 00:03:11.209 dumpcap: explicitly disabled via build config 00:03:11.209 graph: explicitly disabled via build config 00:03:11.209 pdump: explicitly disabled via build config 00:03:11.209 proc-info: explicitly disabled via build config 00:03:11.209 test-acl: explicitly disabled via build config 00:03:11.209 test-bbdev: explicitly disabled via build config 00:03:11.209 test-cmdline: explicitly disabled via build config 00:03:11.209 test-compress-perf: explicitly disabled via build config 00:03:11.209 test-crypto-perf: explicitly disabled via build config 00:03:11.209 test-dma-perf: explicitly disabled via build config 00:03:11.209 test-eventdev: explicitly disabled via build config 00:03:11.209 test-fib: explicitly disabled via build config 00:03:11.209 test-flow-perf: explicitly disabled via build config 00:03:11.209 test-gpudev: explicitly disabled via build config 00:03:11.209 test-mldev: explicitly disabled via build config 00:03:11.209 test-pipeline: 
explicitly disabled via build config 00:03:11.209 test-pmd: explicitly disabled via build config 00:03:11.209 test-regex: explicitly disabled via build config 00:03:11.209 test-sad: explicitly disabled via build config 00:03:11.209 test-security-perf: explicitly disabled via build config 00:03:11.209 00:03:11.209 libs: 00:03:11.209 argparse: explicitly disabled via build config 00:03:11.209 metrics: explicitly disabled via build config 00:03:11.209 acl: explicitly disabled via build config 00:03:11.209 bbdev: explicitly disabled via build config 00:03:11.209 bitratestats: explicitly disabled via build config 00:03:11.209 bpf: explicitly disabled via build config 00:03:11.209 cfgfile: explicitly disabled via build config 00:03:11.209 distributor: explicitly disabled via build config 00:03:11.209 efd: explicitly disabled via build config 00:03:11.209 eventdev: explicitly disabled via build config 00:03:11.209 dispatcher: explicitly disabled via build config 00:03:11.209 gpudev: explicitly disabled via build config 00:03:11.209 gro: explicitly disabled via build config 00:03:11.209 gso: explicitly disabled via build config 00:03:11.209 ip_frag: explicitly disabled via build config 00:03:11.209 jobstats: explicitly disabled via build config 00:03:11.209 latencystats: explicitly disabled via build config 00:03:11.209 lpm: explicitly disabled via build config 00:03:11.209 member: explicitly disabled via build config 00:03:11.209 pcapng: explicitly disabled via build config 00:03:11.209 rawdev: explicitly disabled via build config 00:03:11.209 regexdev: explicitly disabled via build config 00:03:11.209 mldev: explicitly disabled via build config 00:03:11.209 rib: explicitly disabled via build config 00:03:11.209 sched: explicitly disabled via build config 00:03:11.209 stack: explicitly disabled via build config 00:03:11.209 ipsec: explicitly disabled via build config 00:03:11.209 pdcp: explicitly disabled via build config 00:03:11.209 fib: explicitly disabled via build config 00:03:11.209 port: explicitly disabled via build config 00:03:11.209 pdump: explicitly disabled via build config 00:03:11.209 table: explicitly disabled via build config 00:03:11.209 pipeline: explicitly disabled via build config 00:03:11.209 graph: explicitly disabled via build config 00:03:11.209 node: explicitly disabled via build config 00:03:11.209 00:03:11.209 drivers: 00:03:11.209 common/cpt: not in enabled drivers build config 00:03:11.209 common/dpaax: not in enabled drivers build config 00:03:11.209 common/iavf: not in enabled drivers build config 00:03:11.209 common/idpf: not in enabled drivers build config 00:03:11.209 common/ionic: not in enabled drivers build config 00:03:11.209 common/mvep: not in enabled drivers build config 00:03:11.209 common/octeontx: not in enabled drivers build config 00:03:11.209 bus/auxiliary: not in enabled drivers build config 00:03:11.209 bus/cdx: not in enabled drivers build config 00:03:11.209 bus/dpaa: not in enabled drivers build config 00:03:11.209 bus/fslmc: not in enabled drivers build config 00:03:11.209 bus/ifpga: not in enabled drivers build config 00:03:11.209 bus/platform: not in enabled drivers build config 00:03:11.209 bus/uacce: not in enabled drivers build config 00:03:11.209 bus/vmbus: not in enabled drivers build config 00:03:11.209 common/cnxk: not in enabled drivers build config 00:03:11.209 common/mlx5: not in enabled drivers build config 00:03:11.209 common/nfp: not in enabled drivers build config 00:03:11.209 common/nitrox: not in enabled drivers build config 
00:03:11.209 common/qat: not in enabled drivers build config 00:03:11.209 common/sfc_efx: not in enabled drivers build config 00:03:11.209 mempool/bucket: not in enabled drivers build config 00:03:11.209 mempool/cnxk: not in enabled drivers build config 00:03:11.209 mempool/dpaa: not in enabled drivers build config 00:03:11.209 mempool/dpaa2: not in enabled drivers build config 00:03:11.209 mempool/octeontx: not in enabled drivers build config 00:03:11.209 mempool/stack: not in enabled drivers build config 00:03:11.209 dma/cnxk: not in enabled drivers build config 00:03:11.209 dma/dpaa: not in enabled drivers build config 00:03:11.209 dma/dpaa2: not in enabled drivers build config 00:03:11.210 dma/hisilicon: not in enabled drivers build config 00:03:11.210 dma/idxd: not in enabled drivers build config 00:03:11.210 dma/ioat: not in enabled drivers build config 00:03:11.210 dma/skeleton: not in enabled drivers build config 00:03:11.210 net/af_packet: not in enabled drivers build config 00:03:11.210 net/af_xdp: not in enabled drivers build config 00:03:11.210 net/ark: not in enabled drivers build config 00:03:11.210 net/atlantic: not in enabled drivers build config 00:03:11.210 net/avp: not in enabled drivers build config 00:03:11.210 net/axgbe: not in enabled drivers build config 00:03:11.210 net/bnx2x: not in enabled drivers build config 00:03:11.210 net/bnxt: not in enabled drivers build config 00:03:11.210 net/bonding: not in enabled drivers build config 00:03:11.210 net/cnxk: not in enabled drivers build config 00:03:11.210 net/cpfl: not in enabled drivers build config 00:03:11.210 net/cxgbe: not in enabled drivers build config 00:03:11.210 net/dpaa: not in enabled drivers build config 00:03:11.210 net/dpaa2: not in enabled drivers build config 00:03:11.210 net/e1000: not in enabled drivers build config 00:03:11.210 net/ena: not in enabled drivers build config 00:03:11.210 net/enetc: not in enabled drivers build config 00:03:11.210 net/enetfec: not in enabled drivers build config 00:03:11.210 net/enic: not in enabled drivers build config 00:03:11.210 net/failsafe: not in enabled drivers build config 00:03:11.210 net/fm10k: not in enabled drivers build config 00:03:11.210 net/gve: not in enabled drivers build config 00:03:11.210 net/hinic: not in enabled drivers build config 00:03:11.210 net/hns3: not in enabled drivers build config 00:03:11.210 net/i40e: not in enabled drivers build config 00:03:11.210 net/iavf: not in enabled drivers build config 00:03:11.210 net/ice: not in enabled drivers build config 00:03:11.210 net/idpf: not in enabled drivers build config 00:03:11.210 net/igc: not in enabled drivers build config 00:03:11.210 net/ionic: not in enabled drivers build config 00:03:11.210 net/ipn3ke: not in enabled drivers build config 00:03:11.210 net/ixgbe: not in enabled drivers build config 00:03:11.210 net/mana: not in enabled drivers build config 00:03:11.210 net/memif: not in enabled drivers build config 00:03:11.210 net/mlx4: not in enabled drivers build config 00:03:11.210 net/mlx5: not in enabled drivers build config 00:03:11.210 net/mvneta: not in enabled drivers build config 00:03:11.210 net/mvpp2: not in enabled drivers build config 00:03:11.210 net/netvsc: not in enabled drivers build config 00:03:11.210 net/nfb: not in enabled drivers build config 00:03:11.210 net/nfp: not in enabled drivers build config 00:03:11.210 net/ngbe: not in enabled drivers build config 00:03:11.210 net/null: not in enabled drivers build config 00:03:11.210 net/octeontx: not in enabled drivers 
build config 00:03:11.210 net/octeon_ep: not in enabled drivers build config 00:03:11.210 net/pcap: not in enabled drivers build config 00:03:11.210 net/pfe: not in enabled drivers build config 00:03:11.210 net/qede: not in enabled drivers build config 00:03:11.210 net/ring: not in enabled drivers build config 00:03:11.210 net/sfc: not in enabled drivers build config 00:03:11.210 net/softnic: not in enabled drivers build config 00:03:11.210 net/tap: not in enabled drivers build config 00:03:11.210 net/thunderx: not in enabled drivers build config 00:03:11.210 net/txgbe: not in enabled drivers build config 00:03:11.210 net/vdev_netvsc: not in enabled drivers build config 00:03:11.210 net/vhost: not in enabled drivers build config 00:03:11.210 net/virtio: not in enabled drivers build config 00:03:11.210 net/vmxnet3: not in enabled drivers build config 00:03:11.210 raw/*: missing internal dependency, "rawdev" 00:03:11.210 crypto/armv8: not in enabled drivers build config 00:03:11.210 crypto/bcmfs: not in enabled drivers build config 00:03:11.210 crypto/caam_jr: not in enabled drivers build config 00:03:11.210 crypto/ccp: not in enabled drivers build config 00:03:11.210 crypto/cnxk: not in enabled drivers build config 00:03:11.210 crypto/dpaa_sec: not in enabled drivers build config 00:03:11.210 crypto/dpaa2_sec: not in enabled drivers build config 00:03:11.210 crypto/ipsec_mb: not in enabled drivers build config 00:03:11.210 crypto/mlx5: not in enabled drivers build config 00:03:11.210 crypto/mvsam: not in enabled drivers build config 00:03:11.210 crypto/nitrox: not in enabled drivers build config 00:03:11.210 crypto/null: not in enabled drivers build config 00:03:11.210 crypto/octeontx: not in enabled drivers build config 00:03:11.210 crypto/openssl: not in enabled drivers build config 00:03:11.210 crypto/scheduler: not in enabled drivers build config 00:03:11.210 crypto/uadk: not in enabled drivers build config 00:03:11.210 crypto/virtio: not in enabled drivers build config 00:03:11.210 compress/isal: not in enabled drivers build config 00:03:11.210 compress/mlx5: not in enabled drivers build config 00:03:11.210 compress/nitrox: not in enabled drivers build config 00:03:11.210 compress/octeontx: not in enabled drivers build config 00:03:11.210 compress/zlib: not in enabled drivers build config 00:03:11.210 regex/*: missing internal dependency, "regexdev" 00:03:11.210 ml/*: missing internal dependency, "mldev" 00:03:11.210 vdpa/ifc: not in enabled drivers build config 00:03:11.210 vdpa/mlx5: not in enabled drivers build config 00:03:11.210 vdpa/nfp: not in enabled drivers build config 00:03:11.210 vdpa/sfc: not in enabled drivers build config 00:03:11.210 event/*: missing internal dependency, "eventdev" 00:03:11.210 baseband/*: missing internal dependency, "bbdev" 00:03:11.210 gpu/*: missing internal dependency, "gpudev" 00:03:11.210 00:03:11.210 00:03:11.210 Build targets in project: 85 00:03:11.210 00:03:11.210 DPDK 24.03.0 00:03:11.210 00:03:11.210 User defined options 00:03:11.210 buildtype : debug 00:03:11.210 default_library : shared 00:03:11.210 libdir : lib 00:03:11.210 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:11.210 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:11.210 c_link_args : 00:03:11.210 cpu_instruction_set: native 00:03:11.210 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:11.210 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:11.210 enable_docs : false 00:03:11.210 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:11.210 enable_kmods : false 00:03:11.210 max_lcores : 128 00:03:11.210 tests : false 00:03:11.210 00:03:11.210 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:11.470 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:11.728 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:11.728 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:11.728 [3/268] Linking static target lib/librte_kvargs.a 00:03:11.728 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:11.728 [5/268] Linking static target lib/librte_log.a 00:03:11.728 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:12.294 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.294 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:12.552 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:12.552 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:12.552 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:12.552 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:12.810 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:12.810 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:12.810 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:12.810 [16/268] Linking static target lib/librte_telemetry.a 00:03:12.810 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:12.810 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:12.810 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.810 [20/268] Linking target lib/librte_log.so.24.1 00:03:13.378 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:13.378 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:13.378 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:13.637 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:13.637 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:13.637 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:13.637 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:13.637 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.637 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:13.637 [30/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:13.637 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:13.637 [32/268] Linking target lib/librte_telemetry.so.24.1 00:03:13.895 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:13.895 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:13.895 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:14.153 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:14.153 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:14.412 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:14.412 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:14.671 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:14.671 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:14.671 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:14.671 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:14.671 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:14.671 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:14.671 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:14.671 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:14.929 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:14.929 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:14.929 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:15.186 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:15.444 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:15.444 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:15.444 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:15.703 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:15.703 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:15.703 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:15.703 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:15.962 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:15.962 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:15.962 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:15.962 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:16.593 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:16.593 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:16.593 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:16.593 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:16.593 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:16.593 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:16.852 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:16.852 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:16.852 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:16.852 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:16.852 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:17.111 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:17.111 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:17.111 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:17.382 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:17.382 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:17.382 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:17.641 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:17.641 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:17.641 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:17.641 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:17.899 [84/268] Linking static target lib/librte_ring.a 00:03:17.899 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:17.899 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:17.899 [87/268] Linking static target lib/librte_eal.a 00:03:17.899 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:17.899 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:18.158 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:18.158 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:18.158 [92/268] Linking static target lib/librte_rcu.a 00:03:18.158 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:18.158 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.416 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:18.674 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:18.674 [97/268] Linking static target lib/librte_mempool.a 00:03:18.674 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:18.674 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:18.674 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:18.674 [101/268] Linking static target lib/librte_mbuf.a 00:03:18.674 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:18.674 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.932 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:19.190 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:19.190 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:19.190 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:19.190 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:19.190 [109/268] Linking static target lib/librte_net.a 00:03:19.448 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:19.448 [111/268] Linking static target lib/librte_meter.a 00:03:19.706 [112/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:19.706 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:19.706 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.706 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.706 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.706 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.965 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:19.965 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:20.223 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:20.480 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:20.738 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:20.738 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:20.738 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:20.997 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:20.997 [126/268] Linking static target lib/librte_pci.a 00:03:20.997 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:20.997 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:20.997 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:21.255 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:21.255 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:21.255 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:21.255 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:21.255 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.255 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:21.255 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:21.255 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:21.255 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:21.513 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:21.513 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:21.513 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:21.514 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:21.514 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:21.514 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:21.514 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:21.514 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:21.514 [147/268] Linking static target lib/librte_ethdev.a 00:03:21.772 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:22.031 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:22.031 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:22.031 [151/268] Linking static 
target lib/librte_cmdline.a 00:03:22.031 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:22.031 [153/268] Linking static target lib/librte_timer.a 00:03:22.289 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:22.289 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:22.289 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:22.549 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:22.549 [158/268] Linking static target lib/librte_hash.a 00:03:22.549 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:22.806 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.806 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:23.064 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:23.064 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:23.064 [164/268] Linking static target lib/librte_compressdev.a 00:03:23.322 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:23.322 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:23.322 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:23.581 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:23.581 [169/268] Linking static target lib/librte_dmadev.a 00:03:23.581 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:23.581 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:23.581 [172/268] Linking static target lib/librte_cryptodev.a 00:03:23.581 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:23.839 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.839 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:23.839 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.097 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:24.097 [178/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.356 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:24.356 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:24.356 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:24.356 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:24.356 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.614 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:24.872 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:24.872 [186/268] Linking static target lib/librte_power.a 00:03:25.131 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:25.131 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:25.131 [189/268] Linking static target lib/librte_security.a 00:03:25.131 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:25.131 [191/268] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:25.131 [192/268] Linking static target lib/librte_reorder.a 00:03:25.390 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:25.649 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:25.649 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.907 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.166 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:26.166 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.166 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.166 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:26.424 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:26.424 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:26.424 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:26.683 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:26.683 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:26.941 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:26.941 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:26.941 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:26.941 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:26.941 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:27.199 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:27.199 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:27.199 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:27.199 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:27.199 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:27.199 [216/268] Linking static target drivers/librte_bus_pci.a 00:03:27.468 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:27.468 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:27.468 [219/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:27.468 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:27.468 [221/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:27.468 [222/268] Linking static target drivers/librte_bus_vdev.a 00:03:27.468 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:27.468 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:27.468 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:27.468 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:27.726 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.726 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:03:28.293 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:28.293 [230/268] Linking static target lib/librte_vhost.a 00:03:29.229 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.229 [232/268] Linking target lib/librte_eal.so.24.1 00:03:29.487 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:29.487 [234/268] Linking target lib/librte_pci.so.24.1 00:03:29.487 [235/268] Linking target lib/librte_meter.so.24.1 00:03:29.487 [236/268] Linking target lib/librte_timer.so.24.1 00:03:29.487 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:29.487 [238/268] Linking target lib/librte_ring.so.24.1 00:03:29.487 [239/268] Linking target lib/librte_dmadev.so.24.1 00:03:29.746 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:29.746 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:29.746 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:29.746 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:29.746 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:29.746 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:29.746 [246/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.746 [247/268] Linking target lib/librte_mempool.so.24.1 00:03:29.746 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:29.746 [249/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.005 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:30.005 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:30.005 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:30.005 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:30.005 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:30.264 [255/268] Linking target lib/librte_net.so.24.1 00:03:30.264 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:30.264 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:30.264 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:30.264 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:30.264 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:30.264 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:30.264 [262/268] Linking target lib/librte_hash.so.24.1 00:03:30.264 [263/268] Linking target lib/librte_security.so.24.1 00:03:30.523 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:30.523 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:30.523 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:30.523 [267/268] Linking target lib/librte_power.so.24.1 00:03:30.523 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:30.523 INFO: autodetecting backend as ninja 00:03:30.523 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:57.100 CC lib/log/log.o 00:03:57.100 CC lib/ut/ut.o 00:03:57.100 CC lib/log/log_deprecated.o 00:03:57.100 CC 
lib/log/log_flags.o 00:03:57.100 CC lib/ut_mock/mock.o 00:03:57.100 LIB libspdk_ut_mock.a 00:03:57.100 LIB libspdk_ut.a 00:03:57.100 LIB libspdk_log.a 00:03:57.100 SO libspdk_ut_mock.so.6.0 00:03:57.100 SO libspdk_ut.so.2.0 00:03:57.100 SO libspdk_log.so.7.1 00:03:57.100 SYMLINK libspdk_ut_mock.so 00:03:57.100 SYMLINK libspdk_ut.so 00:03:57.100 SYMLINK libspdk_log.so 00:03:57.100 CC lib/dma/dma.o 00:03:57.100 CXX lib/trace_parser/trace.o 00:03:57.100 CC lib/ioat/ioat.o 00:03:57.100 CC lib/util/base64.o 00:03:57.100 CC lib/util/bit_array.o 00:03:57.100 CC lib/util/cpuset.o 00:03:57.100 CC lib/util/crc16.o 00:03:57.100 CC lib/util/crc32.o 00:03:57.100 CC lib/util/crc32c.o 00:03:57.100 CC lib/vfio_user/host/vfio_user_pci.o 00:03:57.100 CC lib/util/crc32_ieee.o 00:03:57.100 CC lib/util/crc64.o 00:03:57.100 CC lib/util/dif.o 00:03:57.100 LIB libspdk_dma.a 00:03:57.100 CC lib/vfio_user/host/vfio_user.o 00:03:57.100 SO libspdk_dma.so.5.0 00:03:57.100 CC lib/util/fd.o 00:03:57.100 CC lib/util/fd_group.o 00:03:57.100 SYMLINK libspdk_dma.so 00:03:57.100 CC lib/util/file.o 00:03:57.100 CC lib/util/hexlify.o 00:03:57.100 LIB libspdk_ioat.a 00:03:57.100 CC lib/util/iov.o 00:03:57.100 SO libspdk_ioat.so.7.0 00:03:57.100 LIB libspdk_vfio_user.a 00:03:57.100 CC lib/util/math.o 00:03:57.100 CC lib/util/net.o 00:03:57.100 SYMLINK libspdk_ioat.so 00:03:57.100 SO libspdk_vfio_user.so.5.0 00:03:57.100 CC lib/util/pipe.o 00:03:57.100 CC lib/util/strerror_tls.o 00:03:57.100 SYMLINK libspdk_vfio_user.so 00:03:57.100 CC lib/util/string.o 00:03:57.100 CC lib/util/uuid.o 00:03:57.100 CC lib/util/xor.o 00:03:57.100 CC lib/util/zipf.o 00:03:57.100 CC lib/util/md5.o 00:03:57.100 LIB libspdk_util.a 00:03:57.100 SO libspdk_util.so.10.1 00:03:57.100 LIB libspdk_trace_parser.a 00:03:57.100 SYMLINK libspdk_util.so 00:03:57.100 SO libspdk_trace_parser.so.6.0 00:03:57.100 SYMLINK libspdk_trace_parser.so 00:03:57.100 CC lib/vmd/vmd.o 00:03:57.100 CC lib/vmd/led.o 00:03:57.100 CC lib/json/json_parse.o 00:03:57.100 CC lib/json/json_util.o 00:03:57.100 CC lib/rdma_utils/rdma_utils.o 00:03:57.100 CC lib/json/json_write.o 00:03:57.100 CC lib/env_dpdk/env.o 00:03:57.100 CC lib/env_dpdk/memory.o 00:03:57.100 CC lib/idxd/idxd.o 00:03:57.100 CC lib/conf/conf.o 00:03:57.100 CC lib/env_dpdk/pci.o 00:03:57.100 CC lib/env_dpdk/init.o 00:03:57.100 CC lib/env_dpdk/threads.o 00:03:57.100 LIB libspdk_conf.a 00:03:57.100 SO libspdk_conf.so.6.0 00:03:57.100 LIB libspdk_rdma_utils.a 00:03:57.100 SO libspdk_rdma_utils.so.1.0 00:03:57.100 SYMLINK libspdk_conf.so 00:03:57.100 LIB libspdk_json.a 00:03:57.100 CC lib/env_dpdk/pci_ioat.o 00:03:57.100 SYMLINK libspdk_rdma_utils.so 00:03:57.100 CC lib/env_dpdk/pci_virtio.o 00:03:57.100 SO libspdk_json.so.6.0 00:03:57.100 CC lib/env_dpdk/pci_vmd.o 00:03:57.100 SYMLINK libspdk_json.so 00:03:57.100 CC lib/env_dpdk/pci_idxd.o 00:03:57.100 CC lib/env_dpdk/pci_event.o 00:03:57.100 CC lib/idxd/idxd_user.o 00:03:57.100 CC lib/env_dpdk/sigbus_handler.o 00:03:57.100 CC lib/env_dpdk/pci_dpdk.o 00:03:57.100 CC lib/idxd/idxd_kernel.o 00:03:57.100 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:57.100 LIB libspdk_vmd.a 00:03:57.100 SO libspdk_vmd.so.6.0 00:03:57.100 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:57.100 SYMLINK libspdk_vmd.so 00:03:57.101 LIB libspdk_idxd.a 00:03:57.101 CC lib/rdma_provider/common.o 00:03:57.101 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:57.101 SO libspdk_idxd.so.12.1 00:03:57.101 SYMLINK libspdk_idxd.so 00:03:57.101 CC lib/jsonrpc/jsonrpc_server.o 00:03:57.101 CC lib/jsonrpc/jsonrpc_server_tcp.o 
00:03:57.101 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:57.101 CC lib/jsonrpc/jsonrpc_client.o 00:03:57.101 LIB libspdk_rdma_provider.a 00:03:57.101 SO libspdk_rdma_provider.so.7.0 00:03:57.101 SYMLINK libspdk_rdma_provider.so 00:03:57.101 LIB libspdk_jsonrpc.a 00:03:57.101 SO libspdk_jsonrpc.so.6.0 00:03:57.360 SYMLINK libspdk_jsonrpc.so 00:03:57.360 LIB libspdk_env_dpdk.a 00:03:57.360 SO libspdk_env_dpdk.so.15.1 00:03:57.618 CC lib/rpc/rpc.o 00:03:57.618 SYMLINK libspdk_env_dpdk.so 00:03:57.876 LIB libspdk_rpc.a 00:03:57.876 SO libspdk_rpc.so.6.0 00:03:57.876 SYMLINK libspdk_rpc.so 00:03:58.133 CC lib/keyring/keyring.o 00:03:58.133 CC lib/keyring/keyring_rpc.o 00:03:58.133 CC lib/trace/trace.o 00:03:58.133 CC lib/trace/trace_flags.o 00:03:58.133 CC lib/trace/trace_rpc.o 00:03:58.133 CC lib/notify/notify.o 00:03:58.133 CC lib/notify/notify_rpc.o 00:03:58.391 LIB libspdk_notify.a 00:03:58.391 SO libspdk_notify.so.6.0 00:03:58.391 LIB libspdk_keyring.a 00:03:58.391 LIB libspdk_trace.a 00:03:58.649 SYMLINK libspdk_notify.so 00:03:58.649 SO libspdk_keyring.so.2.0 00:03:58.649 SO libspdk_trace.so.11.0 00:03:58.649 SYMLINK libspdk_keyring.so 00:03:58.649 SYMLINK libspdk_trace.so 00:03:58.906 CC lib/thread/thread.o 00:03:58.906 CC lib/thread/iobuf.o 00:03:58.906 CC lib/sock/sock.o 00:03:58.906 CC lib/sock/sock_rpc.o 00:03:59.472 LIB libspdk_sock.a 00:03:59.472 SO libspdk_sock.so.10.0 00:03:59.472 SYMLINK libspdk_sock.so 00:03:59.730 CC lib/nvme/nvme_ctrlr.o 00:03:59.730 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:59.730 CC lib/nvme/nvme_ns_cmd.o 00:03:59.730 CC lib/nvme/nvme_fabric.o 00:03:59.730 CC lib/nvme/nvme_ns.o 00:03:59.730 CC lib/nvme/nvme_pcie.o 00:03:59.730 CC lib/nvme/nvme_pcie_common.o 00:03:59.730 CC lib/nvme/nvme_qpair.o 00:03:59.730 CC lib/nvme/nvme.o 00:04:00.696 LIB libspdk_thread.a 00:04:00.696 SO libspdk_thread.so.11.0 00:04:00.696 SYMLINK libspdk_thread.so 00:04:00.696 CC lib/nvme/nvme_quirks.o 00:04:00.696 CC lib/nvme/nvme_transport.o 00:04:00.696 CC lib/nvme/nvme_discovery.o 00:04:00.696 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:00.696 CC lib/accel/accel.o 00:04:00.696 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:00.955 CC lib/nvme/nvme_tcp.o 00:04:00.955 CC lib/blob/blobstore.o 00:04:00.955 CC lib/blob/request.o 00:04:01.214 CC lib/blob/zeroes.o 00:04:01.214 CC lib/blob/blob_bs_dev.o 00:04:01.473 CC lib/accel/accel_rpc.o 00:04:01.473 CC lib/accel/accel_sw.o 00:04:01.473 CC lib/nvme/nvme_opal.o 00:04:01.473 CC lib/nvme/nvme_io_msg.o 00:04:01.473 CC lib/nvme/nvme_poll_group.o 00:04:01.473 CC lib/nvme/nvme_zns.o 00:04:01.473 CC lib/nvme/nvme_stubs.o 00:04:01.732 CC lib/init/json_config.o 00:04:01.991 LIB libspdk_accel.a 00:04:01.991 SO libspdk_accel.so.16.0 00:04:01.991 SYMLINK libspdk_accel.so 00:04:01.991 CC lib/init/subsystem.o 00:04:01.991 CC lib/init/subsystem_rpc.o 00:04:01.991 CC lib/init/rpc.o 00:04:02.251 CC lib/virtio/virtio.o 00:04:02.251 CC lib/nvme/nvme_auth.o 00:04:02.251 CC lib/nvme/nvme_cuse.o 00:04:02.251 CC lib/nvme/nvme_rdma.o 00:04:02.251 CC lib/virtio/virtio_vhost_user.o 00:04:02.251 LIB libspdk_init.a 00:04:02.251 SO libspdk_init.so.6.0 00:04:02.509 CC lib/fsdev/fsdev.o 00:04:02.509 CC lib/virtio/virtio_vfio_user.o 00:04:02.509 SYMLINK libspdk_init.so 00:04:02.509 CC lib/virtio/virtio_pci.o 00:04:02.509 CC lib/bdev/bdev.o 00:04:02.509 CC lib/bdev/bdev_rpc.o 00:04:02.509 CC lib/event/app.o 00:04:02.768 CC lib/event/reactor.o 00:04:02.768 LIB libspdk_virtio.a 00:04:02.768 SO libspdk_virtio.so.7.0 00:04:02.768 SYMLINK libspdk_virtio.so 00:04:02.768 CC lib/fsdev/fsdev_io.o 
00:04:02.768 CC lib/event/log_rpc.o 00:04:03.026 CC lib/fsdev/fsdev_rpc.o 00:04:03.026 CC lib/event/app_rpc.o 00:04:03.026 CC lib/event/scheduler_static.o 00:04:03.026 CC lib/bdev/bdev_zone.o 00:04:03.285 CC lib/bdev/part.o 00:04:03.285 CC lib/bdev/scsi_nvme.o 00:04:03.285 LIB libspdk_fsdev.a 00:04:03.285 SO libspdk_fsdev.so.2.0 00:04:03.285 LIB libspdk_event.a 00:04:03.285 SYMLINK libspdk_fsdev.so 00:04:03.285 SO libspdk_event.so.14.0 00:04:03.544 SYMLINK libspdk_event.so 00:04:03.544 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:03.544 LIB libspdk_nvme.a 00:04:03.804 SO libspdk_nvme.so.15.0 00:04:04.063 LIB libspdk_blob.a 00:04:04.063 SO libspdk_blob.so.11.0 00:04:04.063 SYMLINK libspdk_nvme.so 00:04:04.063 SYMLINK libspdk_blob.so 00:04:04.321 LIB libspdk_fuse_dispatcher.a 00:04:04.321 SO libspdk_fuse_dispatcher.so.1.0 00:04:04.321 SYMLINK libspdk_fuse_dispatcher.so 00:04:04.322 CC lib/lvol/lvol.o 00:04:04.322 CC lib/blobfs/blobfs.o 00:04:04.322 CC lib/blobfs/tree.o 00:04:05.258 LIB libspdk_blobfs.a 00:04:05.258 SO libspdk_blobfs.so.10.0 00:04:05.517 LIB libspdk_bdev.a 00:04:05.517 SYMLINK libspdk_blobfs.so 00:04:05.517 LIB libspdk_lvol.a 00:04:05.517 SO libspdk_lvol.so.10.0 00:04:05.517 SO libspdk_bdev.so.17.0 00:04:05.517 SYMLINK libspdk_lvol.so 00:04:05.517 SYMLINK libspdk_bdev.so 00:04:05.775 CC lib/nvmf/ctrlr.o 00:04:05.775 CC lib/scsi/dev.o 00:04:05.775 CC lib/nvmf/ctrlr_discovery.o 00:04:05.775 CC lib/scsi/lun.o 00:04:05.775 CC lib/nvmf/subsystem.o 00:04:05.775 CC lib/nvmf/ctrlr_bdev.o 00:04:05.775 CC lib/nvmf/nvmf.o 00:04:05.775 CC lib/nbd/nbd.o 00:04:05.775 CC lib/ublk/ublk.o 00:04:05.775 CC lib/ftl/ftl_core.o 00:04:06.034 CC lib/scsi/port.o 00:04:06.034 CC lib/ftl/ftl_init.o 00:04:06.293 CC lib/scsi/scsi.o 00:04:06.293 CC lib/scsi/scsi_bdev.o 00:04:06.294 CC lib/nbd/nbd_rpc.o 00:04:06.294 CC lib/ftl/ftl_layout.o 00:04:06.294 CC lib/scsi/scsi_pr.o 00:04:06.294 CC lib/ublk/ublk_rpc.o 00:04:06.552 CC lib/nvmf/nvmf_rpc.o 00:04:06.552 LIB libspdk_nbd.a 00:04:06.552 SO libspdk_nbd.so.7.0 00:04:06.552 CC lib/nvmf/transport.o 00:04:06.552 LIB libspdk_ublk.a 00:04:06.552 SYMLINK libspdk_nbd.so 00:04:06.552 SO libspdk_ublk.so.3.0 00:04:06.552 CC lib/nvmf/tcp.o 00:04:06.552 CC lib/ftl/ftl_debug.o 00:04:06.811 SYMLINK libspdk_ublk.so 00:04:06.811 CC lib/scsi/scsi_rpc.o 00:04:06.811 CC lib/nvmf/stubs.o 00:04:06.811 CC lib/nvmf/mdns_server.o 00:04:06.811 CC lib/nvmf/rdma.o 00:04:06.811 CC lib/scsi/task.o 00:04:06.811 CC lib/ftl/ftl_io.o 00:04:07.070 LIB libspdk_scsi.a 00:04:07.070 CC lib/nvmf/auth.o 00:04:07.070 CC lib/ftl/ftl_sb.o 00:04:07.070 SO libspdk_scsi.so.9.0 00:04:07.070 CC lib/ftl/ftl_l2p.o 00:04:07.330 CC lib/ftl/ftl_l2p_flat.o 00:04:07.330 CC lib/ftl/ftl_nv_cache.o 00:04:07.330 SYMLINK libspdk_scsi.so 00:04:07.330 CC lib/ftl/ftl_band.o 00:04:07.330 CC lib/ftl/ftl_band_ops.o 00:04:07.330 CC lib/ftl/ftl_writer.o 00:04:07.330 CC lib/ftl/ftl_rq.o 00:04:07.587 CC lib/iscsi/conn.o 00:04:07.587 CC lib/ftl/ftl_reloc.o 00:04:07.587 CC lib/ftl/ftl_l2p_cache.o 00:04:07.587 CC lib/ftl/ftl_p2l.o 00:04:07.587 CC lib/ftl/ftl_p2l_log.o 00:04:07.587 CC lib/vhost/vhost.o 00:04:07.846 CC lib/vhost/vhost_rpc.o 00:04:07.846 CC lib/vhost/vhost_scsi.o 00:04:07.846 CC lib/ftl/mngt/ftl_mngt.o 00:04:08.104 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:08.104 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:08.104 CC lib/vhost/vhost_blk.o 00:04:08.104 CC lib/iscsi/init_grp.o 00:04:08.104 CC lib/vhost/rte_vhost_user.o 00:04:08.362 CC lib/iscsi/iscsi.o 00:04:08.362 CC lib/iscsi/param.o 00:04:08.362 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:04:08.362 CC lib/iscsi/portal_grp.o 00:04:08.621 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:08.621 CC lib/iscsi/tgt_node.o 00:04:08.621 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:08.621 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:08.878 CC lib/iscsi/iscsi_subsystem.o 00:04:08.878 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:08.878 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:08.878 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:08.878 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:08.878 LIB libspdk_nvmf.a 00:04:09.137 CC lib/iscsi/iscsi_rpc.o 00:04:09.137 SO libspdk_nvmf.so.20.0 00:04:09.137 CC lib/iscsi/task.o 00:04:09.137 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:09.137 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:09.137 CC lib/ftl/utils/ftl_conf.o 00:04:09.137 CC lib/ftl/utils/ftl_md.o 00:04:09.397 CC lib/ftl/utils/ftl_mempool.o 00:04:09.397 LIB libspdk_vhost.a 00:04:09.397 SYMLINK libspdk_nvmf.so 00:04:09.397 CC lib/ftl/utils/ftl_bitmap.o 00:04:09.397 SO libspdk_vhost.so.8.0 00:04:09.397 CC lib/ftl/utils/ftl_property.o 00:04:09.397 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:09.397 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:09.397 SYMLINK libspdk_vhost.so 00:04:09.397 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:09.397 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:09.658 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:09.658 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:09.658 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:09.658 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:09.658 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:09.917 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:09.917 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:09.917 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:09.917 LIB libspdk_iscsi.a 00:04:09.917 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:09.917 CC lib/ftl/base/ftl_base_dev.o 00:04:09.917 CC lib/ftl/base/ftl_base_bdev.o 00:04:09.917 CC lib/ftl/ftl_trace.o 00:04:09.917 SO libspdk_iscsi.so.8.0 00:04:10.177 SYMLINK libspdk_iscsi.so 00:04:10.177 LIB libspdk_ftl.a 00:04:10.435 SO libspdk_ftl.so.9.0 00:04:11.001 SYMLINK libspdk_ftl.so 00:04:11.260 CC module/env_dpdk/env_dpdk_rpc.o 00:04:11.260 CC module/blob/bdev/blob_bdev.o 00:04:11.260 CC module/sock/posix/posix.o 00:04:11.260 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:11.260 CC module/scheduler/gscheduler/gscheduler.o 00:04:11.260 CC module/fsdev/aio/fsdev_aio.o 00:04:11.260 CC module/keyring/file/keyring.o 00:04:11.260 CC module/accel/error/accel_error.o 00:04:11.260 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:11.260 CC module/sock/uring/uring.o 00:04:11.260 LIB libspdk_env_dpdk_rpc.a 00:04:11.260 SO libspdk_env_dpdk_rpc.so.6.0 00:04:11.518 SYMLINK libspdk_env_dpdk_rpc.so 00:04:11.518 CC module/keyring/file/keyring_rpc.o 00:04:11.518 LIB libspdk_scheduler_gscheduler.a 00:04:11.518 LIB libspdk_scheduler_dpdk_governor.a 00:04:11.518 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:11.518 LIB libspdk_scheduler_dynamic.a 00:04:11.518 SO libspdk_scheduler_gscheduler.so.4.0 00:04:11.518 CC module/accel/error/accel_error_rpc.o 00:04:11.518 SO libspdk_scheduler_dynamic.so.4.0 00:04:11.518 LIB libspdk_blob_bdev.a 00:04:11.518 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:11.518 SYMLINK libspdk_scheduler_gscheduler.so 00:04:11.518 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:11.518 CC module/fsdev/aio/linux_aio_mgr.o 00:04:11.518 SYMLINK libspdk_scheduler_dynamic.so 00:04:11.518 LIB libspdk_keyring_file.a 00:04:11.518 SO libspdk_blob_bdev.so.11.0 00:04:11.518 SO libspdk_keyring_file.so.2.0 00:04:11.778 SYMLINK libspdk_blob_bdev.so 00:04:11.778 CC 
module/keyring/linux/keyring.o 00:04:11.778 LIB libspdk_accel_error.a 00:04:11.778 SYMLINK libspdk_keyring_file.so 00:04:11.778 SO libspdk_accel_error.so.2.0 00:04:11.778 CC module/accel/ioat/accel_ioat.o 00:04:11.778 SYMLINK libspdk_accel_error.so 00:04:11.778 CC module/keyring/linux/keyring_rpc.o 00:04:11.778 CC module/accel/dsa/accel_dsa.o 00:04:12.036 LIB libspdk_fsdev_aio.a 00:04:12.036 CC module/bdev/delay/vbdev_delay.o 00:04:12.036 CC module/bdev/error/vbdev_error.o 00:04:12.036 CC module/blobfs/bdev/blobfs_bdev.o 00:04:12.036 SO libspdk_fsdev_aio.so.1.0 00:04:12.036 CC module/accel/iaa/accel_iaa.o 00:04:12.036 LIB libspdk_sock_uring.a 00:04:12.036 LIB libspdk_keyring_linux.a 00:04:12.036 CC module/accel/ioat/accel_ioat_rpc.o 00:04:12.036 SO libspdk_sock_uring.so.5.0 00:04:12.036 SO libspdk_keyring_linux.so.1.0 00:04:12.036 LIB libspdk_sock_posix.a 00:04:12.036 SYMLINK libspdk_fsdev_aio.so 00:04:12.036 CC module/accel/dsa/accel_dsa_rpc.o 00:04:12.036 SYMLINK libspdk_sock_uring.so 00:04:12.036 SO libspdk_sock_posix.so.6.0 00:04:12.036 SYMLINK libspdk_keyring_linux.so 00:04:12.036 CC module/accel/iaa/accel_iaa_rpc.o 00:04:12.036 CC module/bdev/error/vbdev_error_rpc.o 00:04:12.036 SYMLINK libspdk_sock_posix.so 00:04:12.036 LIB libspdk_accel_ioat.a 00:04:12.036 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:12.295 SO libspdk_accel_ioat.so.6.0 00:04:12.295 LIB libspdk_accel_dsa.a 00:04:12.295 SO libspdk_accel_dsa.so.5.0 00:04:12.295 LIB libspdk_accel_iaa.a 00:04:12.295 SYMLINK libspdk_accel_ioat.so 00:04:12.295 SO libspdk_accel_iaa.so.3.0 00:04:12.295 LIB libspdk_bdev_error.a 00:04:12.295 SYMLINK libspdk_accel_dsa.so 00:04:12.295 SO libspdk_bdev_error.so.6.0 00:04:12.295 CC module/bdev/gpt/gpt.o 00:04:12.295 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:12.295 LIB libspdk_blobfs_bdev.a 00:04:12.295 SYMLINK libspdk_accel_iaa.so 00:04:12.295 CC module/bdev/lvol/vbdev_lvol.o 00:04:12.295 CC module/bdev/malloc/bdev_malloc.o 00:04:12.295 SO libspdk_blobfs_bdev.so.6.0 00:04:12.295 CC module/bdev/null/bdev_null.o 00:04:12.295 SYMLINK libspdk_bdev_error.so 00:04:12.295 CC module/bdev/null/bdev_null_rpc.o 00:04:12.553 CC module/bdev/nvme/bdev_nvme.o 00:04:12.553 SYMLINK libspdk_blobfs_bdev.so 00:04:12.553 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:12.553 CC module/bdev/raid/bdev_raid.o 00:04:12.553 CC module/bdev/passthru/vbdev_passthru.o 00:04:12.553 LIB libspdk_bdev_delay.a 00:04:12.553 CC module/bdev/gpt/vbdev_gpt.o 00:04:12.553 SO libspdk_bdev_delay.so.6.0 00:04:12.553 CC module/bdev/raid/bdev_raid_rpc.o 00:04:12.553 SYMLINK libspdk_bdev_delay.so 00:04:12.553 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:12.553 LIB libspdk_bdev_null.a 00:04:12.811 SO libspdk_bdev_null.so.6.0 00:04:12.811 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:12.811 SYMLINK libspdk_bdev_null.so 00:04:12.811 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:12.811 CC module/bdev/nvme/nvme_rpc.o 00:04:12.811 LIB libspdk_bdev_gpt.a 00:04:12.811 CC module/bdev/nvme/bdev_mdns_client.o 00:04:12.811 CC module/bdev/nvme/vbdev_opal.o 00:04:12.811 SO libspdk_bdev_gpt.so.6.0 00:04:12.811 LIB libspdk_bdev_lvol.a 00:04:12.811 SYMLINK libspdk_bdev_gpt.so 00:04:12.811 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:12.811 SO libspdk_bdev_lvol.so.6.0 00:04:12.811 LIB libspdk_bdev_malloc.a 00:04:13.070 LIB libspdk_bdev_passthru.a 00:04:13.070 SO libspdk_bdev_malloc.so.6.0 00:04:13.070 SO libspdk_bdev_passthru.so.6.0 00:04:13.070 SYMLINK libspdk_bdev_lvol.so 00:04:13.070 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:13.070 SYMLINK 
libspdk_bdev_malloc.so 00:04:13.070 SYMLINK libspdk_bdev_passthru.so 00:04:13.329 CC module/bdev/split/vbdev_split.o 00:04:13.329 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:13.329 CC module/bdev/uring/bdev_uring.o 00:04:13.329 CC module/bdev/aio/bdev_aio.o 00:04:13.329 CC module/bdev/ftl/bdev_ftl.o 00:04:13.329 CC module/bdev/iscsi/bdev_iscsi.o 00:04:13.329 CC module/bdev/aio/bdev_aio_rpc.o 00:04:13.329 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:13.588 CC module/bdev/split/vbdev_split_rpc.o 00:04:13.588 CC module/bdev/raid/bdev_raid_sb.o 00:04:13.588 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:13.588 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:13.588 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:13.588 LIB libspdk_bdev_aio.a 00:04:13.588 CC module/bdev/uring/bdev_uring_rpc.o 00:04:13.588 SO libspdk_bdev_aio.so.6.0 00:04:13.847 LIB libspdk_bdev_split.a 00:04:13.847 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:13.847 LIB libspdk_bdev_zone_block.a 00:04:13.847 SYMLINK libspdk_bdev_aio.so 00:04:13.847 SO libspdk_bdev_split.so.6.0 00:04:13.847 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:13.847 SO libspdk_bdev_zone_block.so.6.0 00:04:13.847 LIB libspdk_bdev_ftl.a 00:04:13.847 LIB libspdk_bdev_uring.a 00:04:13.847 SO libspdk_bdev_ftl.so.6.0 00:04:13.847 CC module/bdev/raid/raid0.o 00:04:13.847 SYMLINK libspdk_bdev_split.so 00:04:13.847 SO libspdk_bdev_uring.so.6.0 00:04:13.847 SYMLINK libspdk_bdev_zone_block.so 00:04:13.847 CC module/bdev/raid/raid1.o 00:04:13.847 CC module/bdev/raid/concat.o 00:04:13.847 SYMLINK libspdk_bdev_ftl.so 00:04:13.847 SYMLINK libspdk_bdev_uring.so 00:04:13.847 LIB libspdk_bdev_iscsi.a 00:04:13.847 SO libspdk_bdev_iscsi.so.6.0 00:04:14.107 LIB libspdk_bdev_virtio.a 00:04:14.107 SYMLINK libspdk_bdev_iscsi.so 00:04:14.107 SO libspdk_bdev_virtio.so.6.0 00:04:14.107 SYMLINK libspdk_bdev_virtio.so 00:04:14.107 LIB libspdk_bdev_raid.a 00:04:14.366 SO libspdk_bdev_raid.so.6.0 00:04:14.366 SYMLINK libspdk_bdev_raid.so 00:04:15.303 LIB libspdk_bdev_nvme.a 00:04:15.303 SO libspdk_bdev_nvme.so.7.1 00:04:15.303 SYMLINK libspdk_bdev_nvme.so 00:04:15.870 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:15.870 CC module/event/subsystems/iobuf/iobuf.o 00:04:15.870 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:15.870 CC module/event/subsystems/sock/sock.o 00:04:15.870 CC module/event/subsystems/vmd/vmd.o 00:04:15.870 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:15.870 CC module/event/subsystems/keyring/keyring.o 00:04:15.870 CC module/event/subsystems/fsdev/fsdev.o 00:04:15.870 CC module/event/subsystems/scheduler/scheduler.o 00:04:15.870 LIB libspdk_event_fsdev.a 00:04:16.129 LIB libspdk_event_vhost_blk.a 00:04:16.129 LIB libspdk_event_keyring.a 00:04:16.129 LIB libspdk_event_sock.a 00:04:16.129 SO libspdk_event_fsdev.so.1.0 00:04:16.129 SO libspdk_event_vhost_blk.so.3.0 00:04:16.129 SO libspdk_event_sock.so.5.0 00:04:16.129 SO libspdk_event_keyring.so.1.0 00:04:16.129 LIB libspdk_event_scheduler.a 00:04:16.129 LIB libspdk_event_iobuf.a 00:04:16.129 LIB libspdk_event_vmd.a 00:04:16.129 SYMLINK libspdk_event_fsdev.so 00:04:16.129 SO libspdk_event_scheduler.so.4.0 00:04:16.129 SO libspdk_event_iobuf.so.3.0 00:04:16.129 SYMLINK libspdk_event_vhost_blk.so 00:04:16.129 SO libspdk_event_vmd.so.6.0 00:04:16.129 SYMLINK libspdk_event_sock.so 00:04:16.129 SYMLINK libspdk_event_keyring.so 00:04:16.129 SYMLINK libspdk_event_scheduler.so 00:04:16.129 SYMLINK libspdk_event_iobuf.so 00:04:16.129 SYMLINK libspdk_event_vmd.so 00:04:16.484 CC 
module/event/subsystems/accel/accel.o 00:04:16.484 LIB libspdk_event_accel.a 00:04:16.751 SO libspdk_event_accel.so.6.0 00:04:16.751 SYMLINK libspdk_event_accel.so 00:04:17.010 CC module/event/subsystems/bdev/bdev.o 00:04:17.269 LIB libspdk_event_bdev.a 00:04:17.269 SO libspdk_event_bdev.so.6.0 00:04:17.269 SYMLINK libspdk_event_bdev.so 00:04:17.528 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:17.528 CC module/event/subsystems/ublk/ublk.o 00:04:17.528 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:17.528 CC module/event/subsystems/nbd/nbd.o 00:04:17.528 CC module/event/subsystems/scsi/scsi.o 00:04:17.787 LIB libspdk_event_nbd.a 00:04:17.787 LIB libspdk_event_ublk.a 00:04:17.787 LIB libspdk_event_scsi.a 00:04:17.787 SO libspdk_event_nbd.so.6.0 00:04:17.787 SO libspdk_event_ublk.so.3.0 00:04:17.787 SO libspdk_event_scsi.so.6.0 00:04:17.787 SYMLINK libspdk_event_nbd.so 00:04:17.787 LIB libspdk_event_nvmf.a 00:04:17.787 SYMLINK libspdk_event_ublk.so 00:04:17.787 SYMLINK libspdk_event_scsi.so 00:04:17.787 SO libspdk_event_nvmf.so.6.0 00:04:17.787 SYMLINK libspdk_event_nvmf.so 00:04:18.046 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:18.046 CC module/event/subsystems/iscsi/iscsi.o 00:04:18.304 LIB libspdk_event_vhost_scsi.a 00:04:18.304 SO libspdk_event_vhost_scsi.so.3.0 00:04:18.304 LIB libspdk_event_iscsi.a 00:04:18.304 SO libspdk_event_iscsi.so.6.0 00:04:18.304 SYMLINK libspdk_event_vhost_scsi.so 00:04:18.304 SYMLINK libspdk_event_iscsi.so 00:04:18.563 SO libspdk.so.6.0 00:04:18.563 SYMLINK libspdk.so 00:04:18.822 CC app/spdk_lspci/spdk_lspci.o 00:04:18.822 CC app/trace_record/trace_record.o 00:04:18.822 CXX app/trace/trace.o 00:04:18.822 CC app/iscsi_tgt/iscsi_tgt.o 00:04:18.822 CC app/nvmf_tgt/nvmf_main.o 00:04:18.822 CC app/spdk_tgt/spdk_tgt.o 00:04:18.822 CC examples/ioat/perf/perf.o 00:04:18.822 CC test/thread/poller_perf/poller_perf.o 00:04:18.822 CC examples/util/zipf/zipf.o 00:04:19.080 CC test/dma/test_dma/test_dma.o 00:04:19.080 LINK spdk_lspci 00:04:19.080 LINK poller_perf 00:04:19.080 LINK spdk_trace_record 00:04:19.080 LINK nvmf_tgt 00:04:19.080 LINK iscsi_tgt 00:04:19.080 LINK spdk_tgt 00:04:19.080 LINK zipf 00:04:19.339 LINK ioat_perf 00:04:19.339 LINK spdk_trace 00:04:19.339 CC app/spdk_nvme_perf/perf.o 00:04:19.339 CC app/spdk_nvme_identify/identify.o 00:04:19.339 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:19.339 CC app/spdk_nvme_discover/discovery_aer.o 00:04:19.599 CC app/spdk_top/spdk_top.o 00:04:19.599 CC examples/ioat/verify/verify.o 00:04:19.599 LINK test_dma 00:04:19.599 CC app/spdk_dd/spdk_dd.o 00:04:19.599 CC examples/sock/hello_world/hello_sock.o 00:04:19.599 LINK interrupt_tgt 00:04:19.599 CC examples/thread/thread/thread_ex.o 00:04:19.599 LINK spdk_nvme_discover 00:04:19.599 LINK verify 00:04:19.858 LINK hello_sock 00:04:19.858 CC test/app/bdev_svc/bdev_svc.o 00:04:19.858 LINK thread 00:04:19.858 CC test/app/histogram_perf/histogram_perf.o 00:04:20.118 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:20.118 CC app/fio/nvme/fio_plugin.o 00:04:20.118 LINK spdk_dd 00:04:20.118 LINK histogram_perf 00:04:20.118 LINK bdev_svc 00:04:20.118 CC app/fio/bdev/fio_plugin.o 00:04:20.118 LINK spdk_nvme_identify 00:04:20.118 LINK spdk_nvme_perf 00:04:20.377 CC examples/vmd/lsvmd/lsvmd.o 00:04:20.377 CC test/app/jsoncat/jsoncat.o 00:04:20.377 CC test/app/stub/stub.o 00:04:20.377 LINK spdk_top 00:04:20.377 LINK nvme_fuzz 00:04:20.377 CC examples/vmd/led/led.o 00:04:20.377 LINK lsvmd 00:04:20.377 CC examples/idxd/perf/perf.o 00:04:20.635 LINK jsoncat 00:04:20.636 
CC app/vhost/vhost.o 00:04:20.636 LINK spdk_nvme 00:04:20.636 LINK stub 00:04:20.636 LINK led 00:04:20.636 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:20.636 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:20.636 LINK spdk_bdev 00:04:20.636 TEST_HEADER include/spdk/accel.h 00:04:20.895 TEST_HEADER include/spdk/accel_module.h 00:04:20.895 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:20.895 TEST_HEADER include/spdk/assert.h 00:04:20.895 TEST_HEADER include/spdk/barrier.h 00:04:20.895 LINK vhost 00:04:20.895 TEST_HEADER include/spdk/base64.h 00:04:20.895 TEST_HEADER include/spdk/bdev.h 00:04:20.895 TEST_HEADER include/spdk/bdev_module.h 00:04:20.895 TEST_HEADER include/spdk/bdev_zone.h 00:04:20.895 TEST_HEADER include/spdk/bit_array.h 00:04:20.895 TEST_HEADER include/spdk/bit_pool.h 00:04:20.895 TEST_HEADER include/spdk/blob_bdev.h 00:04:20.895 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:20.895 TEST_HEADER include/spdk/blobfs.h 00:04:20.895 TEST_HEADER include/spdk/blob.h 00:04:20.895 TEST_HEADER include/spdk/conf.h 00:04:20.895 TEST_HEADER include/spdk/config.h 00:04:20.895 TEST_HEADER include/spdk/cpuset.h 00:04:20.895 TEST_HEADER include/spdk/crc16.h 00:04:20.895 TEST_HEADER include/spdk/crc32.h 00:04:20.895 TEST_HEADER include/spdk/crc64.h 00:04:20.895 TEST_HEADER include/spdk/dif.h 00:04:20.895 TEST_HEADER include/spdk/dma.h 00:04:20.895 TEST_HEADER include/spdk/endian.h 00:04:20.895 TEST_HEADER include/spdk/env_dpdk.h 00:04:20.895 TEST_HEADER include/spdk/env.h 00:04:20.895 TEST_HEADER include/spdk/event.h 00:04:20.895 TEST_HEADER include/spdk/fd_group.h 00:04:20.895 TEST_HEADER include/spdk/fd.h 00:04:20.895 TEST_HEADER include/spdk/file.h 00:04:20.895 TEST_HEADER include/spdk/fsdev.h 00:04:20.895 TEST_HEADER include/spdk/fsdev_module.h 00:04:20.895 TEST_HEADER include/spdk/ftl.h 00:04:20.895 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:20.895 TEST_HEADER include/spdk/gpt_spec.h 00:04:20.895 TEST_HEADER include/spdk/hexlify.h 00:04:20.895 TEST_HEADER include/spdk/histogram_data.h 00:04:20.895 TEST_HEADER include/spdk/idxd.h 00:04:20.895 TEST_HEADER include/spdk/idxd_spec.h 00:04:20.895 TEST_HEADER include/spdk/init.h 00:04:20.895 TEST_HEADER include/spdk/ioat.h 00:04:20.895 TEST_HEADER include/spdk/ioat_spec.h 00:04:20.895 TEST_HEADER include/spdk/iscsi_spec.h 00:04:20.895 TEST_HEADER include/spdk/json.h 00:04:20.895 TEST_HEADER include/spdk/jsonrpc.h 00:04:20.895 TEST_HEADER include/spdk/keyring.h 00:04:20.895 TEST_HEADER include/spdk/keyring_module.h 00:04:20.895 LINK idxd_perf 00:04:20.895 TEST_HEADER include/spdk/likely.h 00:04:20.895 TEST_HEADER include/spdk/log.h 00:04:20.895 TEST_HEADER include/spdk/lvol.h 00:04:20.895 TEST_HEADER include/spdk/md5.h 00:04:20.895 TEST_HEADER include/spdk/memory.h 00:04:20.895 TEST_HEADER include/spdk/mmio.h 00:04:20.895 TEST_HEADER include/spdk/nbd.h 00:04:20.895 CC test/blobfs/mkfs/mkfs.o 00:04:20.895 TEST_HEADER include/spdk/net.h 00:04:20.895 TEST_HEADER include/spdk/notify.h 00:04:20.895 TEST_HEADER include/spdk/nvme.h 00:04:20.895 TEST_HEADER include/spdk/nvme_intel.h 00:04:20.895 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:20.895 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:20.895 TEST_HEADER include/spdk/nvme_spec.h 00:04:20.895 TEST_HEADER include/spdk/nvme_zns.h 00:04:20.895 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:20.895 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:20.895 TEST_HEADER include/spdk/nvmf.h 00:04:20.895 TEST_HEADER include/spdk/nvmf_spec.h 00:04:20.895 TEST_HEADER include/spdk/nvmf_transport.h 
00:04:20.895 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:20.895 TEST_HEADER include/spdk/opal.h 00:04:20.895 TEST_HEADER include/spdk/opal_spec.h 00:04:20.895 TEST_HEADER include/spdk/pci_ids.h 00:04:20.895 TEST_HEADER include/spdk/pipe.h 00:04:20.895 TEST_HEADER include/spdk/queue.h 00:04:20.895 TEST_HEADER include/spdk/reduce.h 00:04:20.895 TEST_HEADER include/spdk/rpc.h 00:04:20.895 TEST_HEADER include/spdk/scheduler.h 00:04:20.895 TEST_HEADER include/spdk/scsi.h 00:04:20.895 TEST_HEADER include/spdk/scsi_spec.h 00:04:20.895 TEST_HEADER include/spdk/sock.h 00:04:20.896 TEST_HEADER include/spdk/stdinc.h 00:04:20.896 TEST_HEADER include/spdk/string.h 00:04:20.896 TEST_HEADER include/spdk/thread.h 00:04:20.896 TEST_HEADER include/spdk/trace.h 00:04:20.896 TEST_HEADER include/spdk/trace_parser.h 00:04:20.896 TEST_HEADER include/spdk/tree.h 00:04:20.896 TEST_HEADER include/spdk/ublk.h 00:04:20.896 TEST_HEADER include/spdk/util.h 00:04:20.896 TEST_HEADER include/spdk/uuid.h 00:04:20.896 TEST_HEADER include/spdk/version.h 00:04:20.896 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:20.896 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:20.896 TEST_HEADER include/spdk/vhost.h 00:04:20.896 TEST_HEADER include/spdk/vmd.h 00:04:20.896 TEST_HEADER include/spdk/xor.h 00:04:20.896 TEST_HEADER include/spdk/zipf.h 00:04:20.896 CXX test/cpp_headers/accel.o 00:04:20.896 CXX test/cpp_headers/accel_module.o 00:04:20.896 CC examples/accel/perf/accel_perf.o 00:04:21.154 CC examples/blob/hello_world/hello_blob.o 00:04:21.154 CC test/env/vtophys/vtophys.o 00:04:21.154 LINK mkfs 00:04:21.154 CC test/env/mem_callbacks/mem_callbacks.o 00:04:21.154 CXX test/cpp_headers/assert.o 00:04:21.154 LINK hello_fsdev 00:04:21.154 LINK vhost_fuzz 00:04:21.154 LINK vtophys 00:04:21.412 LINK hello_blob 00:04:21.412 CC test/event/event_perf/event_perf.o 00:04:21.412 CXX test/cpp_headers/barrier.o 00:04:21.412 CC test/event/reactor/reactor.o 00:04:21.412 CC test/event/reactor_perf/reactor_perf.o 00:04:21.412 LINK accel_perf 00:04:21.412 CC test/event/app_repeat/app_repeat.o 00:04:21.412 LINK event_perf 00:04:21.412 CXX test/cpp_headers/base64.o 00:04:21.670 CC test/event/scheduler/scheduler.o 00:04:21.670 LINK reactor 00:04:21.670 LINK reactor_perf 00:04:21.670 CXX test/cpp_headers/bdev.o 00:04:21.670 CXX test/cpp_headers/bdev_module.o 00:04:21.670 LINK app_repeat 00:04:21.670 CC examples/blob/cli/blobcli.o 00:04:21.670 LINK mem_callbacks 00:04:21.927 LINK scheduler 00:04:21.927 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:21.927 CC test/env/memory/memory_ut.o 00:04:21.927 CXX test/cpp_headers/bdev_zone.o 00:04:21.927 CC test/env/pci/pci_ut.o 00:04:21.927 CXX test/cpp_headers/bit_array.o 00:04:21.927 CXX test/cpp_headers/bit_pool.o 00:04:21.927 CXX test/cpp_headers/blob_bdev.o 00:04:21.927 LINK env_dpdk_post_init 00:04:22.186 CXX test/cpp_headers/blobfs_bdev.o 00:04:22.186 LINK blobcli 00:04:22.186 CC examples/nvme/hello_world/hello_world.o 00:04:22.186 CC examples/nvme/reconnect/reconnect.o 00:04:22.186 CXX test/cpp_headers/blobfs.o 00:04:22.186 CC examples/bdev/hello_world/hello_bdev.o 00:04:22.186 CC test/lvol/esnap/esnap.o 00:04:22.186 LINK pci_ut 00:04:22.444 CC test/nvme/aer/aer.o 00:04:22.444 LINK iscsi_fuzz 00:04:22.444 CXX test/cpp_headers/blob.o 00:04:22.444 LINK hello_world 00:04:22.444 CC test/nvme/reset/reset.o 00:04:22.444 CXX test/cpp_headers/conf.o 00:04:22.444 LINK hello_bdev 00:04:22.702 LINK reconnect 00:04:22.702 CXX test/cpp_headers/config.o 00:04:22.702 LINK aer 00:04:22.702 CC 
examples/nvme/nvme_manage/nvme_manage.o 00:04:22.702 CXX test/cpp_headers/cpuset.o 00:04:22.702 CC examples/nvme/arbitration/arbitration.o 00:04:22.702 CC test/nvme/sgl/sgl.o 00:04:22.702 CXX test/cpp_headers/crc16.o 00:04:22.702 LINK reset 00:04:22.961 CC examples/bdev/bdevperf/bdevperf.o 00:04:22.961 CC test/nvme/e2edp/nvme_dp.o 00:04:22.961 CXX test/cpp_headers/crc32.o 00:04:22.961 CC test/nvme/overhead/overhead.o 00:04:22.961 CC test/nvme/err_injection/err_injection.o 00:04:22.961 LINK sgl 00:04:23.219 LINK arbitration 00:04:23.219 LINK memory_ut 00:04:23.220 CXX test/cpp_headers/crc64.o 00:04:23.220 LINK nvme_manage 00:04:23.220 LINK nvme_dp 00:04:23.220 LINK err_injection 00:04:23.220 LINK overhead 00:04:23.220 CC test/nvme/startup/startup.o 00:04:23.220 CXX test/cpp_headers/dif.o 00:04:23.220 CC examples/nvme/hotplug/hotplug.o 00:04:23.478 CC test/rpc_client/rpc_client_test.o 00:04:23.478 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:23.478 CC test/nvme/reserve/reserve.o 00:04:23.478 CC examples/nvme/abort/abort.o 00:04:23.478 CXX test/cpp_headers/dma.o 00:04:23.478 LINK startup 00:04:23.478 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:23.478 LINK rpc_client_test 00:04:23.737 LINK hotplug 00:04:23.737 CXX test/cpp_headers/endian.o 00:04:23.737 LINK cmb_copy 00:04:23.737 LINK bdevperf 00:04:23.737 LINK reserve 00:04:23.737 LINK pmr_persistence 00:04:23.737 CC test/nvme/simple_copy/simple_copy.o 00:04:23.737 CC test/nvme/connect_stress/connect_stress.o 00:04:23.737 CXX test/cpp_headers/env_dpdk.o 00:04:23.995 LINK abort 00:04:23.995 CC test/accel/dif/dif.o 00:04:23.995 CC test/nvme/boot_partition/boot_partition.o 00:04:23.995 CC test/nvme/compliance/nvme_compliance.o 00:04:23.995 CC test/nvme/fused_ordering/fused_ordering.o 00:04:23.995 LINK simple_copy 00:04:23.995 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:23.995 CXX test/cpp_headers/env.o 00:04:23.995 LINK connect_stress 00:04:24.254 LINK boot_partition 00:04:24.254 CXX test/cpp_headers/event.o 00:04:24.254 LINK fused_ordering 00:04:24.254 CC examples/nvmf/nvmf/nvmf.o 00:04:24.254 CC test/nvme/fdp/fdp.o 00:04:24.254 LINK doorbell_aers 00:04:24.254 LINK nvme_compliance 00:04:24.254 CC test/nvme/cuse/cuse.o 00:04:24.254 CXX test/cpp_headers/fd_group.o 00:04:24.254 CXX test/cpp_headers/fd.o 00:04:24.512 CXX test/cpp_headers/file.o 00:04:24.512 CXX test/cpp_headers/fsdev.o 00:04:24.512 CXX test/cpp_headers/fsdev_module.o 00:04:24.512 CXX test/cpp_headers/ftl.o 00:04:24.512 CXX test/cpp_headers/fuse_dispatcher.o 00:04:24.512 CXX test/cpp_headers/gpt_spec.o 00:04:24.512 LINK nvmf 00:04:24.512 CXX test/cpp_headers/hexlify.o 00:04:24.512 LINK dif 00:04:24.512 LINK fdp 00:04:24.769 CXX test/cpp_headers/histogram_data.o 00:04:24.769 CXX test/cpp_headers/idxd.o 00:04:24.769 CXX test/cpp_headers/idxd_spec.o 00:04:24.769 CXX test/cpp_headers/init.o 00:04:24.769 CXX test/cpp_headers/ioat.o 00:04:24.769 CXX test/cpp_headers/ioat_spec.o 00:04:24.769 CXX test/cpp_headers/iscsi_spec.o 00:04:24.769 CXX test/cpp_headers/json.o 00:04:24.769 CXX test/cpp_headers/jsonrpc.o 00:04:24.769 CXX test/cpp_headers/keyring.o 00:04:25.027 CXX test/cpp_headers/keyring_module.o 00:04:25.027 CXX test/cpp_headers/likely.o 00:04:25.027 CXX test/cpp_headers/log.o 00:04:25.027 CXX test/cpp_headers/lvol.o 00:04:25.027 CXX test/cpp_headers/md5.o 00:04:25.027 CXX test/cpp_headers/memory.o 00:04:25.027 CXX test/cpp_headers/mmio.o 00:04:25.027 CC test/bdev/bdevio/bdevio.o 00:04:25.027 CXX test/cpp_headers/nbd.o 00:04:25.027 CXX test/cpp_headers/net.o 
00:04:25.027 CXX test/cpp_headers/notify.o 00:04:25.027 CXX test/cpp_headers/nvme.o 00:04:25.285 CXX test/cpp_headers/nvme_intel.o 00:04:25.285 CXX test/cpp_headers/nvme_ocssd.o 00:04:25.285 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:25.285 CXX test/cpp_headers/nvme_spec.o 00:04:25.285 CXX test/cpp_headers/nvme_zns.o 00:04:25.285 CXX test/cpp_headers/nvmf_cmd.o 00:04:25.285 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:25.285 CXX test/cpp_headers/nvmf.o 00:04:25.285 CXX test/cpp_headers/nvmf_spec.o 00:04:25.544 CXX test/cpp_headers/nvmf_transport.o 00:04:25.544 CXX test/cpp_headers/opal.o 00:04:25.544 LINK bdevio 00:04:25.544 CXX test/cpp_headers/opal_spec.o 00:04:25.544 CXX test/cpp_headers/pci_ids.o 00:04:25.544 CXX test/cpp_headers/pipe.o 00:04:25.544 CXX test/cpp_headers/queue.o 00:04:25.544 CXX test/cpp_headers/reduce.o 00:04:25.544 CXX test/cpp_headers/rpc.o 00:04:25.544 CXX test/cpp_headers/scheduler.o 00:04:25.544 CXX test/cpp_headers/scsi.o 00:04:25.544 CXX test/cpp_headers/scsi_spec.o 00:04:25.544 CXX test/cpp_headers/sock.o 00:04:25.803 LINK cuse 00:04:25.803 CXX test/cpp_headers/stdinc.o 00:04:25.803 CXX test/cpp_headers/string.o 00:04:25.803 CXX test/cpp_headers/thread.o 00:04:25.803 CXX test/cpp_headers/trace.o 00:04:25.803 CXX test/cpp_headers/trace_parser.o 00:04:25.803 CXX test/cpp_headers/tree.o 00:04:25.803 CXX test/cpp_headers/ublk.o 00:04:25.803 CXX test/cpp_headers/util.o 00:04:25.803 CXX test/cpp_headers/uuid.o 00:04:25.803 CXX test/cpp_headers/version.o 00:04:26.062 CXX test/cpp_headers/vfio_user_pci.o 00:04:26.062 CXX test/cpp_headers/vfio_user_spec.o 00:04:26.062 CXX test/cpp_headers/vhost.o 00:04:26.062 CXX test/cpp_headers/vmd.o 00:04:26.062 CXX test/cpp_headers/xor.o 00:04:26.062 CXX test/cpp_headers/zipf.o 00:04:28.001 LINK esnap 00:04:28.260 00:04:28.260 real 1m29.676s 00:04:28.260 user 8m14.087s 00:04:28.260 sys 1m43.306s 00:04:28.260 13:23:40 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:28.260 13:23:40 make -- common/autotest_common.sh@10 -- $ set +x 00:04:28.260 ************************************ 00:04:28.260 END TEST make 00:04:28.260 ************************************ 00:04:28.260 13:23:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:28.260 13:23:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:28.260 13:23:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:28.260 13:23:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:28.260 13:23:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:28.260 13:23:40 -- pm/common@44 -- $ pid=5409 00:04:28.260 13:23:40 -- pm/common@50 -- $ kill -TERM 5409 00:04:28.260 13:23:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:28.260 13:23:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:28.260 13:23:40 -- pm/common@44 -- $ pid=5411 00:04:28.260 13:23:40 -- pm/common@50 -- $ kill -TERM 5411 00:04:28.260 13:23:40 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:28.260 13:23:40 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:28.260 13:23:40 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.260 13:23:40 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.260 13:23:40 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.519 13:23:40 -- common/autotest_common.sh@1693 -- # 
lt 1.15 2 00:04:28.519 13:23:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.519 13:23:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.519 13:23:40 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.519 13:23:40 -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.519 13:23:40 -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.519 13:23:40 -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.519 13:23:40 -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.519 13:23:40 -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.519 13:23:40 -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.519 13:23:40 -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.519 13:23:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.519 13:23:40 -- scripts/common.sh@344 -- # case "$op" in 00:04:28.519 13:23:40 -- scripts/common.sh@345 -- # : 1 00:04:28.519 13:23:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.519 13:23:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:28.519 13:23:40 -- scripts/common.sh@365 -- # decimal 1 00:04:28.519 13:23:40 -- scripts/common.sh@353 -- # local d=1 00:04:28.519 13:23:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.519 13:23:40 -- scripts/common.sh@355 -- # echo 1 00:04:28.519 13:23:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.519 13:23:40 -- scripts/common.sh@366 -- # decimal 2 00:04:28.519 13:23:40 -- scripts/common.sh@353 -- # local d=2 00:04:28.519 13:23:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.519 13:23:40 -- scripts/common.sh@355 -- # echo 2 00:04:28.519 13:23:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.519 13:23:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.519 13:23:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.519 13:23:40 -- scripts/common.sh@368 -- # return 0 00:04:28.520 13:23:40 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.520 13:23:40 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:28.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.520 --rc genhtml_branch_coverage=1 00:04:28.520 --rc genhtml_function_coverage=1 00:04:28.520 --rc genhtml_legend=1 00:04:28.520 --rc geninfo_all_blocks=1 00:04:28.520 --rc geninfo_unexecuted_blocks=1 00:04:28.520 00:04:28.520 ' 00:04:28.520 13:23:40 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:28.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.520 --rc genhtml_branch_coverage=1 00:04:28.520 --rc genhtml_function_coverage=1 00:04:28.520 --rc genhtml_legend=1 00:04:28.520 --rc geninfo_all_blocks=1 00:04:28.520 --rc geninfo_unexecuted_blocks=1 00:04:28.520 00:04:28.520 ' 00:04:28.520 13:23:40 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:28.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.520 --rc genhtml_branch_coverage=1 00:04:28.520 --rc genhtml_function_coverage=1 00:04:28.520 --rc genhtml_legend=1 00:04:28.520 --rc geninfo_all_blocks=1 00:04:28.520 --rc geninfo_unexecuted_blocks=1 00:04:28.520 00:04:28.520 ' 00:04:28.520 13:23:40 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:28.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.520 --rc genhtml_branch_coverage=1 00:04:28.520 --rc genhtml_function_coverage=1 00:04:28.520 --rc genhtml_legend=1 00:04:28.520 --rc geninfo_all_blocks=1 00:04:28.520 --rc geninfo_unexecuted_blocks=1 00:04:28.520 00:04:28.520 ' 
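The trace above is scripts/common.sh detecting the installed lcov version (via lcov --version piped through awk '{print $NF}') and running its cmp_versions / lt helper, which splits the two version strings on '.', '-' and ':' and compares them component by component before choosing the lcov 1.x --rc coverage options. The standalone bash sketch below mirrors that comparison idea only as an editorial illustration: the name version_lt and the closing echo are not part of the repository, the sketch assumes purely numeric components, and it is not the actual cmp_versions implementation.

# Editorial sketch only -- not the repository's cmp_versions helper.
# Compares two dotted version strings component by component ("less than").
# Assumes purely numeric components (e.g. "1.15" vs "2").
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}
# Hypothetical usage, mirroring the gate traced above:
lcov_ver=$(lcov --version 2>/dev/null | awk '{print $NF}')
if version_lt "${lcov_ver:-0}" 2; then
    echo "lcov ${lcov_ver}: using lcov 1.x --rc coverage options"
fi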
00:04:28.520 13:23:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:28.520 13:23:40 -- nvmf/common.sh@7 -- # uname -s 00:04:28.520 13:23:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.520 13:23:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.520 13:23:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.520 13:23:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.520 13:23:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.520 13:23:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.520 13:23:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:28.520 13:23:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.520 13:23:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.520 13:23:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.520 13:23:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:04:28.520 13:23:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:04:28.520 13:23:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.520 13:23:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.520 13:23:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:28.520 13:23:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.520 13:23:40 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:28.520 13:23:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:28.520 13:23:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.520 13:23:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.520 13:23:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.520 13:23:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.520 13:23:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.520 13:23:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.520 13:23:40 -- paths/export.sh@5 -- # export PATH 00:04:28.520 13:23:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.520 13:23:40 -- nvmf/common.sh@51 -- # : 0 00:04:28.520 13:23:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:28.520 13:23:40 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:28.520 13:23:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.520 13:23:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:04:28.520 13:23:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.520 13:23:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:28.520 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:28.520 13:23:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:28.520 13:23:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:28.520 13:23:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:28.520 13:23:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:28.520 13:23:40 -- spdk/autotest.sh@32 -- # uname -s 00:04:28.520 13:23:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:28.520 13:23:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:28.520 13:23:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:28.520 13:23:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:28.520 13:23:40 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:28.520 13:23:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:28.520 13:23:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:28.520 13:23:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:28.520 13:23:40 -- spdk/autotest.sh@48 -- # udevadm_pid=54500 00:04:28.520 13:23:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:28.520 13:23:40 -- pm/common@17 -- # local monitor 00:04:28.520 13:23:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:28.520 13:23:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:28.520 13:23:40 -- pm/common@25 -- # sleep 1 00:04:28.520 13:23:40 -- pm/common@21 -- # date +%s 00:04:28.520 13:23:40 -- pm/common@21 -- # date +%s 00:04:28.520 13:23:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:28.520 13:23:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109020 00:04:28.520 13:23:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109020 00:04:28.520 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109020_collect-vmstat.pm.log 00:04:28.520 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109020_collect-cpu-load.pm.log 00:04:29.457 13:23:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:29.457 13:23:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:29.457 13:23:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.457 13:23:41 -- common/autotest_common.sh@10 -- # set +x 00:04:29.457 13:23:41 -- spdk/autotest.sh@59 -- # create_test_list 00:04:29.457 13:23:41 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:29.457 13:23:41 -- common/autotest_common.sh@10 -- # set +x 00:04:29.716 13:23:41 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:29.716 13:23:41 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:29.716 13:23:41 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:29.716 13:23:41 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:29.716 13:23:41 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:29.716 13:23:41 -- 
spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:29.716 13:23:41 -- common/autotest_common.sh@1457 -- # uname 00:04:29.716 13:23:41 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:29.716 13:23:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:29.716 13:23:41 -- common/autotest_common.sh@1477 -- # uname 00:04:29.716 13:23:41 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:29.716 13:23:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:29.716 13:23:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:29.716 lcov: LCOV version 1.15 00:04:29.716 13:23:41 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:47.804 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:47.804 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:02.692 13:24:12 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:02.692 13:24:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.692 13:24:12 -- common/autotest_common.sh@10 -- # set +x 00:05:02.692 13:24:12 -- spdk/autotest.sh@78 -- # rm -f 00:05:02.692 13:24:12 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:02.692 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.692 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:02.692 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:02.692 13:24:13 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:02.692 13:24:13 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:02.692 13:24:13 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:02.692 13:24:13 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:02.692 13:24:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:02.692 13:24:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:02.692 13:24:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:02.692 13:24:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:02.692 13:24:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:02.692 13:24:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:02.692 13:24:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:02.692 13:24:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:02.692 13:24:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:02.692 13:24:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:02.692 13:24:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:02.692 13:24:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:02.692 13:24:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:02.692 13:24:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 
00:05:02.692 13:24:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:02.692 13:24:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:02.692 13:24:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:02.692 13:24:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:02.692 13:24:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:02.692 13:24:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:02.692 13:24:13 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:02.692 13:24:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.692 13:24:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.692 13:24:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:02.692 13:24:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:02.692 13:24:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:02.692 No valid GPT data, bailing 00:05:02.692 13:24:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:02.692 13:24:13 -- scripts/common.sh@394 -- # pt= 00:05:02.692 13:24:13 -- scripts/common.sh@395 -- # return 1 00:05:02.692 13:24:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:02.692 1+0 records in 00:05:02.692 1+0 records out 00:05:02.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415135 s, 253 MB/s 00:05:02.692 13:24:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.692 13:24:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.692 13:24:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:02.692 13:24:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:02.692 13:24:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:02.692 No valid GPT data, bailing 00:05:02.692 13:24:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:02.692 13:24:13 -- scripts/common.sh@394 -- # pt= 00:05:02.692 13:24:13 -- scripts/common.sh@395 -- # return 1 00:05:02.692 13:24:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:02.692 1+0 records in 00:05:02.692 1+0 records out 00:05:02.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476509 s, 220 MB/s 00:05:02.692 13:24:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.692 13:24:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.692 13:24:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:02.692 13:24:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:02.692 13:24:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:02.692 No valid GPT data, bailing 00:05:02.692 13:24:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:02.692 13:24:13 -- scripts/common.sh@394 -- # pt= 00:05:02.692 13:24:13 -- scripts/common.sh@395 -- # return 1 00:05:02.692 13:24:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:02.692 1+0 records in 00:05:02.692 1+0 records out 00:05:02.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502542 s, 209 MB/s 00:05:02.692 13:24:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.692 13:24:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.693 13:24:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:02.693 13:24:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 
00:05:02.693 13:24:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:02.693 No valid GPT data, bailing 00:05:02.693 13:24:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:02.693 13:24:13 -- scripts/common.sh@394 -- # pt= 00:05:02.693 13:24:13 -- scripts/common.sh@395 -- # return 1 00:05:02.693 13:24:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:02.693 1+0 records in 00:05:02.693 1+0 records out 00:05:02.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00579936 s, 181 MB/s 00:05:02.693 13:24:13 -- spdk/autotest.sh@105 -- # sync 00:05:02.693 13:24:13 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:02.693 13:24:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:02.693 13:24:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:04.597 13:24:16 -- spdk/autotest.sh@111 -- # uname -s 00:05:04.597 13:24:16 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:04.597 13:24:16 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:04.597 13:24:16 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:04.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.856 Hugepages 00:05:04.856 node hugesize free / total 00:05:04.856 node0 1048576kB 0 / 0 00:05:04.856 node0 2048kB 0 / 0 00:05:04.856 00:05:04.856 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:05.115 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:05.115 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:05.115 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:05.115 13:24:16 -- spdk/autotest.sh@117 -- # uname -s 00:05:05.115 13:24:16 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:05.115 13:24:16 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:05.115 13:24:16 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.052 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.052 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.052 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.052 13:24:17 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:06.989 13:24:18 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:06.990 13:24:18 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:06.990 13:24:18 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:06.990 13:24:18 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:06.990 13:24:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:06.990 13:24:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:06.990 13:24:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:06.990 13:24:18 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:06.990 13:24:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:07.248 13:24:18 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:07.248 13:24:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:07.248 13:24:18 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:07.507 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.507 Waiting for block devices as requested 00:05:07.507 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:07.507 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:07.766 13:24:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:07.766 13:24:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:07.766 13:24:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:07.766 13:24:19 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:07.766 13:24:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:07.766 13:24:19 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:07.766 13:24:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:07.766 13:24:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:07.766 13:24:19 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:07.766 13:24:19 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:07.766 13:24:19 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:07.766 13:24:19 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:07.766 13:24:19 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:07.766 13:24:19 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:07.766 13:24:19 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:07.766 13:24:19 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:07.766 13:24:19 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:07.766 13:24:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:07.766 13:24:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:07.766 13:24:19 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:07.766 13:24:19 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:07.766 13:24:19 -- common/autotest_common.sh@1543 -- # continue 00:05:07.766 13:24:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:07.766 13:24:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:07.766 13:24:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:07.766 13:24:19 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:07.766 13:24:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:07.766 13:24:19 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:07.766 13:24:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:07.766 13:24:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:07.766 13:24:19 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:07.766 13:24:19 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:07.766 13:24:19 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:07.766 13:24:19 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:07.766 13:24:19 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:07.766 13:24:19 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:07.766 13:24:19 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:07.766 13:24:19 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:07.766 13:24:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:07.766 13:24:19 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:07.766 13:24:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:07.766 13:24:19 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:07.766 13:24:19 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:07.766 13:24:19 -- common/autotest_common.sh@1543 -- # continue 00:05:07.766 13:24:19 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:07.766 13:24:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.766 13:24:19 -- common/autotest_common.sh@10 -- # set +x 00:05:07.766 13:24:19 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:07.766 13:24:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.766 13:24:19 -- common/autotest_common.sh@10 -- # set +x 00:05:07.766 13:24:19 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.333 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.592 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.592 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.592 13:24:20 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:08.592 13:24:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.592 13:24:20 -- common/autotest_common.sh@10 -- # set +x 00:05:08.592 13:24:20 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:08.592 13:24:20 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:08.592 13:24:20 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:08.592 13:24:20 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:08.592 13:24:20 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:08.592 13:24:20 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:08.592 13:24:20 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:08.592 13:24:20 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:08.592 13:24:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:08.592 13:24:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:08.592 13:24:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:08.592 13:24:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:08.592 13:24:20 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:08.850 13:24:20 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:08.850 13:24:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:08.850 13:24:20 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:08.850 13:24:20 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:08.850 13:24:20 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:08.850 13:24:20 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:08.850 13:24:20 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:08.850 13:24:20 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:08.850 13:24:20 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:08.850 13:24:20 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:08.850 13:24:20 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 
00:05:08.850 13:24:20 -- common/autotest_common.sh@1572 -- # return 0 00:05:08.850 13:24:20 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:08.850 13:24:20 -- common/autotest_common.sh@1580 -- # return 0 00:05:08.850 13:24:20 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:08.850 13:24:20 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:08.850 13:24:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:08.850 13:24:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:08.850 13:24:20 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:08.850 13:24:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.850 13:24:20 -- common/autotest_common.sh@10 -- # set +x 00:05:08.850 13:24:20 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:08.850 13:24:20 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:08.850 13:24:20 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:08.850 13:24:20 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:08.850 13:24:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.850 13:24:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.850 13:24:20 -- common/autotest_common.sh@10 -- # set +x 00:05:08.850 ************************************ 00:05:08.850 START TEST env 00:05:08.850 ************************************ 00:05:08.850 13:24:20 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:08.850 * Looking for test storage... 00:05:08.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:08.850 13:24:20 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.850 13:24:20 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.850 13:24:20 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.109 13:24:20 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.109 13:24:20 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.109 13:24:20 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.109 13:24:20 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.109 13:24:20 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.109 13:24:20 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.109 13:24:20 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.109 13:24:20 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.109 13:24:20 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.109 13:24:20 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.109 13:24:20 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.109 13:24:20 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.109 13:24:20 env -- scripts/common.sh@344 -- # case "$op" in 00:05:09.109 13:24:20 env -- scripts/common.sh@345 -- # : 1 00:05:09.109 13:24:20 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.109 13:24:20 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.109 13:24:20 env -- scripts/common.sh@365 -- # decimal 1 00:05:09.109 13:24:20 env -- scripts/common.sh@353 -- # local d=1 00:05:09.109 13:24:20 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.109 13:24:20 env -- scripts/common.sh@355 -- # echo 1 00:05:09.109 13:24:20 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.109 13:24:20 env -- scripts/common.sh@366 -- # decimal 2 00:05:09.109 13:24:20 env -- scripts/common.sh@353 -- # local d=2 00:05:09.109 13:24:20 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.109 13:24:20 env -- scripts/common.sh@355 -- # echo 2 00:05:09.109 13:24:20 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.109 13:24:20 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.109 13:24:20 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.109 13:24:20 env -- scripts/common.sh@368 -- # return 0 00:05:09.109 13:24:20 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.109 13:24:20 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.109 --rc genhtml_branch_coverage=1 00:05:09.109 --rc genhtml_function_coverage=1 00:05:09.109 --rc genhtml_legend=1 00:05:09.109 --rc geninfo_all_blocks=1 00:05:09.109 --rc geninfo_unexecuted_blocks=1 00:05:09.109 00:05:09.109 ' 00:05:09.109 13:24:20 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.109 --rc genhtml_branch_coverage=1 00:05:09.109 --rc genhtml_function_coverage=1 00:05:09.109 --rc genhtml_legend=1 00:05:09.109 --rc geninfo_all_blocks=1 00:05:09.109 --rc geninfo_unexecuted_blocks=1 00:05:09.109 00:05:09.109 ' 00:05:09.109 13:24:20 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.109 --rc genhtml_branch_coverage=1 00:05:09.109 --rc genhtml_function_coverage=1 00:05:09.109 --rc genhtml_legend=1 00:05:09.109 --rc geninfo_all_blocks=1 00:05:09.109 --rc geninfo_unexecuted_blocks=1 00:05:09.109 00:05:09.109 ' 00:05:09.109 13:24:20 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.109 --rc genhtml_branch_coverage=1 00:05:09.109 --rc genhtml_function_coverage=1 00:05:09.109 --rc genhtml_legend=1 00:05:09.109 --rc geninfo_all_blocks=1 00:05:09.109 --rc geninfo_unexecuted_blocks=1 00:05:09.109 00:05:09.109 ' 00:05:09.109 13:24:20 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:09.109 13:24:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.109 13:24:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.109 13:24:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.109 ************************************ 00:05:09.109 START TEST env_memory 00:05:09.109 ************************************ 00:05:09.109 13:24:20 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:09.109 00:05:09.109 00:05:09.109 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.109 http://cunit.sourceforge.net/ 00:05:09.109 00:05:09.109 00:05:09.109 Suite: memory 00:05:09.109 Test: alloc and free memory map ...[2024-11-20 13:24:20.886662] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:09.109 passed 00:05:09.110 Test: mem map translation ...[2024-11-20 13:24:20.918954] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:09.110 [2024-11-20 13:24:20.919008] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:09.110 [2024-11-20 13:24:20.919066] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:09.110 [2024-11-20 13:24:20.919076] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:09.110 passed 00:05:09.110 Test: mem map registration ...[2024-11-20 13:24:20.983982] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:09.110 [2024-11-20 13:24:20.984037] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:09.110 passed 00:05:09.369 Test: mem map adjacent registrations ...passed 00:05:09.369 00:05:09.369 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.369 suites 1 1 n/a 0 0 00:05:09.369 tests 4 4 4 0 0 00:05:09.369 asserts 152 152 152 0 n/a 00:05:09.369 00:05:09.369 Elapsed time = 0.220 seconds 00:05:09.369 00:05:09.369 real 0m0.236s 00:05:09.369 user 0m0.218s 00:05:09.369 sys 0m0.015s 00:05:09.369 13:24:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.369 ************************************ 00:05:09.369 END TEST env_memory 00:05:09.369 ************************************ 00:05:09.369 13:24:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:09.369 13:24:21 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:09.369 13:24:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.369 13:24:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.369 13:24:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.369 ************************************ 00:05:09.369 START TEST env_vtophys 00:05:09.369 ************************************ 00:05:09.369 13:24:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:09.369 EAL: lib.eal log level changed from notice to debug 00:05:09.369 EAL: Detected lcore 0 as core 0 on socket 0 00:05:09.369 EAL: Detected lcore 1 as core 0 on socket 0 00:05:09.369 EAL: Detected lcore 2 as core 0 on socket 0 00:05:09.369 EAL: Detected lcore 3 as core 0 on socket 0 00:05:09.369 EAL: Detected lcore 4 as core 0 on socket 0 00:05:09.369 EAL: Detected lcore 5 as core 0 on socket 0 00:05:09.369 EAL: Detected lcore 6 as core 0 on socket 0 00:05:09.369 EAL: Detected lcore 7 as core 0 on socket 0 00:05:09.369 EAL: Detected lcore 8 as core 0 on socket 0 00:05:09.369 EAL: Detected lcore 9 as core 0 on socket 0 00:05:09.369 EAL: Maximum logical cores by configuration: 128 00:05:09.369 EAL: Detected CPU lcores: 10 00:05:09.369 EAL: Detected NUMA nodes: 1 00:05:09.369 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:09.369 EAL: Detected shared linkage of DPDK 00:05:09.369 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:09.369 EAL: Selected IOVA mode 'PA' 00:05:09.369 EAL: Probing VFIO support... 00:05:09.369 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:09.369 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:09.369 EAL: Ask a virtual area of 0x2e000 bytes 00:05:09.369 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:09.369 EAL: Setting up physically contiguous memory... 00:05:09.369 EAL: Setting maximum number of open files to 524288 00:05:09.369 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:09.369 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:09.369 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.369 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:09.369 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.369 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.369 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:09.369 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:09.369 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.369 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:09.369 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.369 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.369 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:09.369 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:09.369 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.369 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:09.369 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.369 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.369 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:09.369 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:09.369 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.369 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:09.369 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.369 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.369 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:09.369 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:09.369 EAL: Hugepages will be freed exactly as allocated. 00:05:09.369 EAL: No shared files mode enabled, IPC is disabled 00:05:09.369 EAL: No shared files mode enabled, IPC is disabled 00:05:09.369 EAL: TSC frequency is ~2200000 KHz 00:05:09.369 EAL: Main lcore 0 is ready (tid=7f1446411a00;cpuset=[0]) 00:05:09.369 EAL: Trying to obtain current memory policy. 00:05:09.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.369 EAL: Restoring previous memory policy: 0 00:05:09.369 EAL: request: mp_malloc_sync 00:05:09.369 EAL: No shared files mode enabled, IPC is disabled 00:05:09.369 EAL: Heap on socket 0 was expanded by 2MB 00:05:09.369 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:09.369 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:09.369 EAL: Mem event callback 'spdk:(nil)' registered 00:05:09.369 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:09.369 00:05:09.369 00:05:09.369 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.369 http://cunit.sourceforge.net/ 00:05:09.369 00:05:09.369 00:05:09.369 Suite: components_suite 00:05:09.369 Test: vtophys_malloc_test ...passed 00:05:09.369 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:09.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.369 EAL: Restoring previous memory policy: 4 00:05:09.369 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.369 EAL: request: mp_malloc_sync 00:05:09.369 EAL: No shared files mode enabled, IPC is disabled 00:05:09.369 EAL: Heap on socket 0 was expanded by 4MB 00:05:09.369 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.369 EAL: request: mp_malloc_sync 00:05:09.369 EAL: No shared files mode enabled, IPC is disabled 00:05:09.369 EAL: Heap on socket 0 was shrunk by 4MB 00:05:09.369 EAL: Trying to obtain current memory policy. 00:05:09.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.369 EAL: Restoring previous memory policy: 4 00:05:09.369 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.369 EAL: request: mp_malloc_sync 00:05:09.369 EAL: No shared files mode enabled, IPC is disabled 00:05:09.369 EAL: Heap on socket 0 was expanded by 6MB 00:05:09.369 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.369 EAL: request: mp_malloc_sync 00:05:09.369 EAL: No shared files mode enabled, IPC is disabled 00:05:09.369 EAL: Heap on socket 0 was shrunk by 6MB 00:05:09.369 EAL: Trying to obtain current memory policy. 00:05:09.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.369 EAL: Restoring previous memory policy: 4 00:05:09.369 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.369 EAL: request: mp_malloc_sync 00:05:09.369 EAL: No shared files mode enabled, IPC is disabled 00:05:09.369 EAL: Heap on socket 0 was expanded by 10MB 00:05:09.369 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.369 EAL: request: mp_malloc_sync 00:05:09.369 EAL: No shared files mode enabled, IPC is disabled 00:05:09.369 EAL: Heap on socket 0 was shrunk by 10MB 00:05:09.369 EAL: Trying to obtain current memory policy. 00:05:09.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.369 EAL: Restoring previous memory policy: 4 00:05:09.369 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.369 EAL: request: mp_malloc_sync 00:05:09.369 EAL: No shared files mode enabled, IPC is disabled 00:05:09.369 EAL: Heap on socket 0 was expanded by 18MB 00:05:09.369 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.369 EAL: request: mp_malloc_sync 00:05:09.369 EAL: No shared files mode enabled, IPC is disabled 00:05:09.370 EAL: Heap on socket 0 was shrunk by 18MB 00:05:09.370 EAL: Trying to obtain current memory policy. 00:05:09.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.370 EAL: Restoring previous memory policy: 4 00:05:09.370 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.370 EAL: request: mp_malloc_sync 00:05:09.370 EAL: No shared files mode enabled, IPC is disabled 00:05:09.370 EAL: Heap on socket 0 was expanded by 34MB 00:05:09.629 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.629 EAL: request: mp_malloc_sync 00:05:09.629 EAL: No shared files mode enabled, IPC is disabled 00:05:09.629 EAL: Heap on socket 0 was shrunk by 34MB 00:05:09.629 EAL: Trying to obtain current memory policy. 
00:05:09.629 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.629 EAL: Restoring previous memory policy: 4 00:05:09.629 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.629 EAL: request: mp_malloc_sync 00:05:09.629 EAL: No shared files mode enabled, IPC is disabled 00:05:09.629 EAL: Heap on socket 0 was expanded by 66MB 00:05:09.629 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.629 EAL: request: mp_malloc_sync 00:05:09.629 EAL: No shared files mode enabled, IPC is disabled 00:05:09.629 EAL: Heap on socket 0 was shrunk by 66MB 00:05:09.629 EAL: Trying to obtain current memory policy. 00:05:09.629 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.629 EAL: Restoring previous memory policy: 4 00:05:09.629 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.629 EAL: request: mp_malloc_sync 00:05:09.629 EAL: No shared files mode enabled, IPC is disabled 00:05:09.629 EAL: Heap on socket 0 was expanded by 130MB 00:05:09.629 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.629 EAL: request: mp_malloc_sync 00:05:09.629 EAL: No shared files mode enabled, IPC is disabled 00:05:09.629 EAL: Heap on socket 0 was shrunk by 130MB 00:05:09.629 EAL: Trying to obtain current memory policy. 00:05:09.629 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.629 EAL: Restoring previous memory policy: 4 00:05:09.629 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.629 EAL: request: mp_malloc_sync 00:05:09.629 EAL: No shared files mode enabled, IPC is disabled 00:05:09.629 EAL: Heap on socket 0 was expanded by 258MB 00:05:09.629 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.889 EAL: request: mp_malloc_sync 00:05:09.889 EAL: No shared files mode enabled, IPC is disabled 00:05:09.889 EAL: Heap on socket 0 was shrunk by 258MB 00:05:09.889 EAL: Trying to obtain current memory policy. 00:05:09.889 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.889 EAL: Restoring previous memory policy: 4 00:05:09.889 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.889 EAL: request: mp_malloc_sync 00:05:09.889 EAL: No shared files mode enabled, IPC is disabled 00:05:09.889 EAL: Heap on socket 0 was expanded by 514MB 00:05:10.148 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.148 EAL: request: mp_malloc_sync 00:05:10.148 EAL: No shared files mode enabled, IPC is disabled 00:05:10.148 EAL: Heap on socket 0 was shrunk by 514MB 00:05:10.148 EAL: Trying to obtain current memory policy. 
00:05:10.148 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.407 EAL: Restoring previous memory policy: 4 00:05:10.407 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.407 EAL: request: mp_malloc_sync 00:05:10.407 EAL: No shared files mode enabled, IPC is disabled 00:05:10.407 EAL: Heap on socket 0 was expanded by 1026MB 00:05:10.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.923 passed 00:05:10.923 00:05:10.923 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.923 suites 1 1 n/a 0 0 00:05:10.923 tests 2 2 2 0 0 00:05:10.923 asserts 5617 5617 5617 0 n/a 00:05:10.923 00:05:10.923 Elapsed time = 1.372 seconds 00:05:10.923 EAL: request: mp_malloc_sync 00:05:10.923 EAL: No shared files mode enabled, IPC is disabled 00:05:10.923 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:10.923 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.923 EAL: request: mp_malloc_sync 00:05:10.923 EAL: No shared files mode enabled, IPC is disabled 00:05:10.923 EAL: Heap on socket 0 was shrunk by 2MB 00:05:10.923 EAL: No shared files mode enabled, IPC is disabled 00:05:10.923 EAL: No shared files mode enabled, IPC is disabled 00:05:10.923 EAL: No shared files mode enabled, IPC is disabled 00:05:10.923 00:05:10.923 real 0m1.583s 00:05:10.923 user 0m0.874s 00:05:10.923 sys 0m0.574s 00:05:10.923 13:24:22 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.923 ************************************ 00:05:10.923 END TEST env_vtophys 00:05:10.923 ************************************ 00:05:10.923 13:24:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:10.923 13:24:22 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:10.923 13:24:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.923 13:24:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.923 13:24:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.923 ************************************ 00:05:10.923 START TEST env_pci 00:05:10.923 ************************************ 00:05:10.923 13:24:22 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:10.923 00:05:10.923 00:05:10.923 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.923 http://cunit.sourceforge.net/ 00:05:10.923 00:05:10.923 00:05:10.923 Suite: pci 00:05:10.923 Test: pci_hook ...[2024-11-20 13:24:22.775465] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56733 has claimed it 00:05:10.923 EAL: Cannot find device (10000:00:01.0) 00:05:10.923 EAL: Failed to attach device on primary process 00:05:10.923 passed 00:05:10.923 00:05:10.923 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.923 suites 1 1 n/a 0 0 00:05:10.923 tests 1 1 1 0 0 00:05:10.923 asserts 25 25 25 0 n/a 00:05:10.923 00:05:10.923 Elapsed time = 0.002 seconds 00:05:10.923 00:05:10.923 real 0m0.020s 00:05:10.923 user 0m0.008s 00:05:10.923 sys 0m0.011s 00:05:10.923 13:24:22 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.923 ************************************ 00:05:10.923 END TEST env_pci 00:05:10.923 ************************************ 00:05:10.923 13:24:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:10.923 13:24:22 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:10.923 13:24:22 env -- env/env.sh@15 -- # uname 00:05:10.923 13:24:22 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:10.923 13:24:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:10.924 13:24:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.924 13:24:22 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:10.924 13:24:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.924 13:24:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.924 ************************************ 00:05:10.924 START TEST env_dpdk_post_init 00:05:10.924 ************************************ 00:05:10.924 13:24:22 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.924 EAL: Detected CPU lcores: 10 00:05:10.924 EAL: Detected NUMA nodes: 1 00:05:10.924 EAL: Detected shared linkage of DPDK 00:05:10.924 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.924 EAL: Selected IOVA mode 'PA' 00:05:11.182 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.182 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:11.182 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:11.182 Starting DPDK initialization... 00:05:11.182 Starting SPDK post initialization... 00:05:11.182 SPDK NVMe probe 00:05:11.182 Attaching to 0000:00:10.0 00:05:11.182 Attaching to 0000:00:11.0 00:05:11.182 Attached to 0000:00:10.0 00:05:11.182 Attached to 0000:00:11.0 00:05:11.182 Cleaning up... 00:05:11.182 00:05:11.182 real 0m0.184s 00:05:11.182 user 0m0.047s 00:05:11.182 sys 0m0.038s 00:05:11.182 13:24:23 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.182 13:24:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.182 ************************************ 00:05:11.182 END TEST env_dpdk_post_init 00:05:11.182 ************************************ 00:05:11.182 13:24:23 env -- env/env.sh@26 -- # uname 00:05:11.182 13:24:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:11.182 13:24:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.182 13:24:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.182 13:24:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.182 13:24:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.182 ************************************ 00:05:11.182 START TEST env_mem_callbacks 00:05:11.182 ************************************ 00:05:11.182 13:24:23 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.182 EAL: Detected CPU lcores: 10 00:05:11.182 EAL: Detected NUMA nodes: 1 00:05:11.182 EAL: Detected shared linkage of DPDK 00:05:11.182 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.182 EAL: Selected IOVA mode 'PA' 00:05:11.440 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.440 00:05:11.440 00:05:11.440 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.440 http://cunit.sourceforge.net/ 00:05:11.440 00:05:11.440 00:05:11.440 Suite: memory 00:05:11.440 Test: test ... 
00:05:11.440 register 0x200000200000 2097152 00:05:11.440 malloc 3145728 00:05:11.440 register 0x200000400000 4194304 00:05:11.440 buf 0x200000500000 len 3145728 PASSED 00:05:11.440 malloc 64 00:05:11.440 buf 0x2000004fff40 len 64 PASSED 00:05:11.440 malloc 4194304 00:05:11.440 register 0x200000800000 6291456 00:05:11.440 buf 0x200000a00000 len 4194304 PASSED 00:05:11.440 free 0x200000500000 3145728 00:05:11.440 free 0x2000004fff40 64 00:05:11.440 unregister 0x200000400000 4194304 PASSED 00:05:11.440 free 0x200000a00000 4194304 00:05:11.440 unregister 0x200000800000 6291456 PASSED 00:05:11.440 malloc 8388608 00:05:11.440 register 0x200000400000 10485760 00:05:11.440 buf 0x200000600000 len 8388608 PASSED 00:05:11.440 free 0x200000600000 8388608 00:05:11.440 unregister 0x200000400000 10485760 PASSED 00:05:11.440 passed 00:05:11.440 00:05:11.440 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.440 suites 1 1 n/a 0 0 00:05:11.440 tests 1 1 1 0 0 00:05:11.440 asserts 15 15 15 0 n/a 00:05:11.440 00:05:11.440 Elapsed time = 0.007 seconds 00:05:11.440 00:05:11.440 real 0m0.138s 00:05:11.440 user 0m0.013s 00:05:11.440 sys 0m0.024s 00:05:11.440 13:24:23 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.440 13:24:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:11.440 ************************************ 00:05:11.440 END TEST env_mem_callbacks 00:05:11.440 ************************************ 00:05:11.440 00:05:11.440 real 0m2.627s 00:05:11.440 user 0m1.353s 00:05:11.440 sys 0m0.920s 00:05:11.440 13:24:23 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.440 ************************************ 00:05:11.440 13:24:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.440 END TEST env 00:05:11.440 ************************************ 00:05:11.440 13:24:23 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:11.440 13:24:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.440 13:24:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.440 13:24:23 -- common/autotest_common.sh@10 -- # set +x 00:05:11.440 ************************************ 00:05:11.440 START TEST rpc 00:05:11.440 ************************************ 00:05:11.440 13:24:23 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:11.440 * Looking for test storage... 
00:05:11.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:11.440 13:24:23 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.440 13:24:23 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.440 13:24:23 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.699 13:24:23 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.699 13:24:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.699 13:24:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.699 13:24:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.699 13:24:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.699 13:24:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.699 13:24:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.699 13:24:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.699 13:24:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.699 13:24:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.699 13:24:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.699 13:24:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.699 13:24:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:11.699 13:24:23 rpc -- scripts/common.sh@345 -- # : 1 00:05:11.699 13:24:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.699 13:24:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.699 13:24:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:11.699 13:24:23 rpc -- scripts/common.sh@353 -- # local d=1 00:05:11.699 13:24:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.699 13:24:23 rpc -- scripts/common.sh@355 -- # echo 1 00:05:11.699 13:24:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.699 13:24:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:11.699 13:24:23 rpc -- scripts/common.sh@353 -- # local d=2 00:05:11.699 13:24:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.699 13:24:23 rpc -- scripts/common.sh@355 -- # echo 2 00:05:11.699 13:24:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.699 13:24:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.699 13:24:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.699 13:24:23 rpc -- scripts/common.sh@368 -- # return 0 00:05:11.699 13:24:23 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.699 13:24:23 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.699 --rc genhtml_branch_coverage=1 00:05:11.699 --rc genhtml_function_coverage=1 00:05:11.699 --rc genhtml_legend=1 00:05:11.699 --rc geninfo_all_blocks=1 00:05:11.699 --rc geninfo_unexecuted_blocks=1 00:05:11.699 00:05:11.699 ' 00:05:11.699 13:24:23 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.699 --rc genhtml_branch_coverage=1 00:05:11.699 --rc genhtml_function_coverage=1 00:05:11.699 --rc genhtml_legend=1 00:05:11.699 --rc geninfo_all_blocks=1 00:05:11.699 --rc geninfo_unexecuted_blocks=1 00:05:11.699 00:05:11.699 ' 00:05:11.699 13:24:23 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:11.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.699 --rc genhtml_branch_coverage=1 00:05:11.699 --rc genhtml_function_coverage=1 00:05:11.699 --rc 
genhtml_legend=1 00:05:11.699 --rc geninfo_all_blocks=1 00:05:11.699 --rc geninfo_unexecuted_blocks=1 00:05:11.699 00:05:11.699 ' 00:05:11.699 13:24:23 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.699 --rc genhtml_branch_coverage=1 00:05:11.699 --rc genhtml_function_coverage=1 00:05:11.699 --rc genhtml_legend=1 00:05:11.699 --rc geninfo_all_blocks=1 00:05:11.699 --rc geninfo_unexecuted_blocks=1 00:05:11.699 00:05:11.699 ' 00:05:11.699 13:24:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56851 00:05:11.699 13:24:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.699 13:24:23 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:11.699 13:24:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56851 00:05:11.699 13:24:23 rpc -- common/autotest_common.sh@835 -- # '[' -z 56851 ']' 00:05:11.699 13:24:23 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.699 13:24:23 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.699 13:24:23 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.699 13:24:23 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.699 13:24:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.699 [2024-11-20 13:24:23.569336] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:05:11.699 [2024-11-20 13:24:23.569447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56851 ] 00:05:11.958 [2024-11-20 13:24:23.722739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.958 [2024-11-20 13:24:23.778451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:11.958 [2024-11-20 13:24:23.778509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56851' to capture a snapshot of events at runtime. 00:05:11.958 [2024-11-20 13:24:23.778524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:11.958 [2024-11-20 13:24:23.778536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:11.958 [2024-11-20 13:24:23.778545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56851 for offline analysis/debug. 
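The app_setup_trace notices above come from spdk_tgt having been started with '-e bdev': the bdev tracepoint group is enabled, the trace buffer is exposed as /dev/shm/spdk_tgt_trace.pid56851, and the suggested way to snapshot it is the spdk_trace tool. A hedged sketch of both options the notice describes (the build/bin location of spdk_trace is an assumption about this tree's layout):
./build/bin/spdk_trace -s spdk_tgt -p 56851 > trace.out     # live snapshot while pid 56851 is running
cp /dev/shm/spdk_tgt_trace.pid56851 /tmp/                   # keep the shm file for offline analysis, as the notice suggests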
00:05:11.958 [2024-11-20 13:24:23.779000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.958 [2024-11-20 13:24:23.854862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:12.217 13:24:24 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.217 13:24:24 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.217 13:24:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:12.217 13:24:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:12.217 13:24:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:12.217 13:24:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:12.217 13:24:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.217 13:24:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.217 13:24:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.217 ************************************ 00:05:12.217 START TEST rpc_integrity 00:05:12.217 ************************************ 00:05:12.217 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:12.217 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.217 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.217 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.217 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.217 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.217 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.217 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.217 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.217 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.217 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.218 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.218 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:12.218 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.218 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.218 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.476 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.476 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.476 { 00:05:12.476 "name": "Malloc0", 00:05:12.476 "aliases": [ 00:05:12.476 "e116f33a-5e3e-4409-8258-43355de218ca" 00:05:12.476 ], 00:05:12.476 "product_name": "Malloc disk", 00:05:12.476 "block_size": 512, 00:05:12.476 "num_blocks": 16384, 00:05:12.476 "uuid": "e116f33a-5e3e-4409-8258-43355de218ca", 00:05:12.476 "assigned_rate_limits": { 00:05:12.476 "rw_ios_per_sec": 0, 00:05:12.476 "rw_mbytes_per_sec": 0, 00:05:12.476 "r_mbytes_per_sec": 0, 00:05:12.476 "w_mbytes_per_sec": 0 00:05:12.476 }, 00:05:12.476 "claimed": false, 00:05:12.476 "zoned": false, 00:05:12.476 
"supported_io_types": { 00:05:12.476 "read": true, 00:05:12.476 "write": true, 00:05:12.476 "unmap": true, 00:05:12.476 "flush": true, 00:05:12.476 "reset": true, 00:05:12.476 "nvme_admin": false, 00:05:12.476 "nvme_io": false, 00:05:12.476 "nvme_io_md": false, 00:05:12.476 "write_zeroes": true, 00:05:12.476 "zcopy": true, 00:05:12.476 "get_zone_info": false, 00:05:12.476 "zone_management": false, 00:05:12.476 "zone_append": false, 00:05:12.476 "compare": false, 00:05:12.476 "compare_and_write": false, 00:05:12.476 "abort": true, 00:05:12.476 "seek_hole": false, 00:05:12.476 "seek_data": false, 00:05:12.476 "copy": true, 00:05:12.476 "nvme_iov_md": false 00:05:12.476 }, 00:05:12.476 "memory_domains": [ 00:05:12.476 { 00:05:12.476 "dma_device_id": "system", 00:05:12.476 "dma_device_type": 1 00:05:12.476 }, 00:05:12.476 { 00:05:12.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.476 "dma_device_type": 2 00:05:12.476 } 00:05:12.476 ], 00:05:12.476 "driver_specific": {} 00:05:12.476 } 00:05:12.476 ]' 00:05:12.476 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.476 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.476 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:12.476 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.476 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.476 [2024-11-20 13:24:24.232963] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:12.476 [2024-11-20 13:24:24.233032] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.476 [2024-11-20 13:24:24.233080] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x528050 00:05:12.476 [2024-11-20 13:24:24.233088] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.476 [2024-11-20 13:24:24.234761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.476 [2024-11-20 13:24:24.234807] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.476 Passthru0 00:05:12.476 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.476 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.476 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.476 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.476 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.476 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.476 { 00:05:12.476 "name": "Malloc0", 00:05:12.476 "aliases": [ 00:05:12.476 "e116f33a-5e3e-4409-8258-43355de218ca" 00:05:12.476 ], 00:05:12.476 "product_name": "Malloc disk", 00:05:12.476 "block_size": 512, 00:05:12.476 "num_blocks": 16384, 00:05:12.476 "uuid": "e116f33a-5e3e-4409-8258-43355de218ca", 00:05:12.476 "assigned_rate_limits": { 00:05:12.476 "rw_ios_per_sec": 0, 00:05:12.476 "rw_mbytes_per_sec": 0, 00:05:12.476 "r_mbytes_per_sec": 0, 00:05:12.476 "w_mbytes_per_sec": 0 00:05:12.477 }, 00:05:12.477 "claimed": true, 00:05:12.477 "claim_type": "exclusive_write", 00:05:12.477 "zoned": false, 00:05:12.477 "supported_io_types": { 00:05:12.477 "read": true, 00:05:12.477 "write": true, 00:05:12.477 "unmap": true, 00:05:12.477 "flush": true, 00:05:12.477 "reset": true, 00:05:12.477 "nvme_admin": false, 
00:05:12.477 "nvme_io": false, 00:05:12.477 "nvme_io_md": false, 00:05:12.477 "write_zeroes": true, 00:05:12.477 "zcopy": true, 00:05:12.477 "get_zone_info": false, 00:05:12.477 "zone_management": false, 00:05:12.477 "zone_append": false, 00:05:12.477 "compare": false, 00:05:12.477 "compare_and_write": false, 00:05:12.477 "abort": true, 00:05:12.477 "seek_hole": false, 00:05:12.477 "seek_data": false, 00:05:12.477 "copy": true, 00:05:12.477 "nvme_iov_md": false 00:05:12.477 }, 00:05:12.477 "memory_domains": [ 00:05:12.477 { 00:05:12.477 "dma_device_id": "system", 00:05:12.477 "dma_device_type": 1 00:05:12.477 }, 00:05:12.477 { 00:05:12.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.477 "dma_device_type": 2 00:05:12.477 } 00:05:12.477 ], 00:05:12.477 "driver_specific": {} 00:05:12.477 }, 00:05:12.477 { 00:05:12.477 "name": "Passthru0", 00:05:12.477 "aliases": [ 00:05:12.477 "6cfa8f3b-bc45-5cd8-8e44-0518ed24e5cc" 00:05:12.477 ], 00:05:12.477 "product_name": "passthru", 00:05:12.477 "block_size": 512, 00:05:12.477 "num_blocks": 16384, 00:05:12.477 "uuid": "6cfa8f3b-bc45-5cd8-8e44-0518ed24e5cc", 00:05:12.477 "assigned_rate_limits": { 00:05:12.477 "rw_ios_per_sec": 0, 00:05:12.477 "rw_mbytes_per_sec": 0, 00:05:12.477 "r_mbytes_per_sec": 0, 00:05:12.477 "w_mbytes_per_sec": 0 00:05:12.477 }, 00:05:12.477 "claimed": false, 00:05:12.477 "zoned": false, 00:05:12.477 "supported_io_types": { 00:05:12.477 "read": true, 00:05:12.477 "write": true, 00:05:12.477 "unmap": true, 00:05:12.477 "flush": true, 00:05:12.477 "reset": true, 00:05:12.477 "nvme_admin": false, 00:05:12.477 "nvme_io": false, 00:05:12.477 "nvme_io_md": false, 00:05:12.477 "write_zeroes": true, 00:05:12.477 "zcopy": true, 00:05:12.477 "get_zone_info": false, 00:05:12.477 "zone_management": false, 00:05:12.477 "zone_append": false, 00:05:12.477 "compare": false, 00:05:12.477 "compare_and_write": false, 00:05:12.477 "abort": true, 00:05:12.477 "seek_hole": false, 00:05:12.477 "seek_data": false, 00:05:12.477 "copy": true, 00:05:12.477 "nvme_iov_md": false 00:05:12.477 }, 00:05:12.477 "memory_domains": [ 00:05:12.477 { 00:05:12.477 "dma_device_id": "system", 00:05:12.477 "dma_device_type": 1 00:05:12.477 }, 00:05:12.477 { 00:05:12.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.477 "dma_device_type": 2 00:05:12.477 } 00:05:12.477 ], 00:05:12.477 "driver_specific": { 00:05:12.477 "passthru": { 00:05:12.477 "name": "Passthru0", 00:05:12.477 "base_bdev_name": "Malloc0" 00:05:12.477 } 00:05:12.477 } 00:05:12.477 } 00:05:12.477 ]' 00:05:12.477 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:12.477 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.477 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.477 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.477 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.477 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.477 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:12.477 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.477 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.477 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.477 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:12.477 13:24:24 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.477 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.477 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.477 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.477 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:12.477 13:24:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.477 00:05:12.477 real 0m0.318s 00:05:12.477 user 0m0.221s 00:05:12.477 sys 0m0.035s 00:05:12.477 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.477 13:24:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.477 ************************************ 00:05:12.477 END TEST rpc_integrity 00:05:12.477 ************************************ 00:05:12.736 13:24:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:12.736 13:24:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.736 13:24:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.736 13:24:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.736 ************************************ 00:05:12.736 START TEST rpc_plugins 00:05:12.736 ************************************ 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:12.736 13:24:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.736 13:24:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:12.736 13:24:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.736 13:24:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:12.736 { 00:05:12.736 "name": "Malloc1", 00:05:12.736 "aliases": [ 00:05:12.736 "d12a2e0c-875b-4860-9153-a6683f860aca" 00:05:12.736 ], 00:05:12.736 "product_name": "Malloc disk", 00:05:12.736 "block_size": 4096, 00:05:12.736 "num_blocks": 256, 00:05:12.736 "uuid": "d12a2e0c-875b-4860-9153-a6683f860aca", 00:05:12.736 "assigned_rate_limits": { 00:05:12.736 "rw_ios_per_sec": 0, 00:05:12.736 "rw_mbytes_per_sec": 0, 00:05:12.736 "r_mbytes_per_sec": 0, 00:05:12.736 "w_mbytes_per_sec": 0 00:05:12.736 }, 00:05:12.736 "claimed": false, 00:05:12.736 "zoned": false, 00:05:12.736 "supported_io_types": { 00:05:12.736 "read": true, 00:05:12.736 "write": true, 00:05:12.736 "unmap": true, 00:05:12.736 "flush": true, 00:05:12.736 "reset": true, 00:05:12.736 "nvme_admin": false, 00:05:12.736 "nvme_io": false, 00:05:12.736 "nvme_io_md": false, 00:05:12.736 "write_zeroes": true, 00:05:12.736 "zcopy": true, 00:05:12.736 "get_zone_info": false, 00:05:12.736 "zone_management": false, 00:05:12.736 "zone_append": false, 00:05:12.736 "compare": false, 00:05:12.736 "compare_and_write": false, 00:05:12.736 "abort": true, 00:05:12.736 "seek_hole": false, 00:05:12.736 "seek_data": false, 00:05:12.736 "copy": true, 00:05:12.736 "nvme_iov_md": false 00:05:12.736 }, 00:05:12.736 "memory_domains": [ 00:05:12.736 { 
00:05:12.736 "dma_device_id": "system", 00:05:12.736 "dma_device_type": 1 00:05:12.736 }, 00:05:12.736 { 00:05:12.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.736 "dma_device_type": 2 00:05:12.736 } 00:05:12.736 ], 00:05:12.736 "driver_specific": {} 00:05:12.736 } 00:05:12.736 ]' 00:05:12.736 13:24:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:12.736 13:24:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:12.736 13:24:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.736 13:24:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.736 13:24:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:12.736 13:24:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:12.736 13:24:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:12.736 00:05:12.736 real 0m0.158s 00:05:12.736 user 0m0.101s 00:05:12.736 sys 0m0.021s 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.736 ************************************ 00:05:12.736 END TEST rpc_plugins 00:05:12.736 13:24:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.736 ************************************ 00:05:12.736 13:24:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:12.736 13:24:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.736 13:24:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.736 13:24:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.736 ************************************ 00:05:12.736 START TEST rpc_trace_cmd_test 00:05:12.736 ************************************ 00:05:12.736 13:24:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:12.736 13:24:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:12.736 13:24:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:12.736 13:24:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.736 13:24:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:12.736 13:24:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.736 13:24:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:12.736 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56851", 00:05:12.736 "tpoint_group_mask": "0x8", 00:05:12.736 "iscsi_conn": { 00:05:12.736 "mask": "0x2", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "scsi": { 00:05:12.736 "mask": "0x4", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "bdev": { 00:05:12.736 "mask": "0x8", 00:05:12.736 "tpoint_mask": "0xffffffffffffffff" 00:05:12.736 }, 00:05:12.736 "nvmf_rdma": { 00:05:12.736 "mask": "0x10", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "nvmf_tcp": { 00:05:12.736 "mask": "0x20", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "ftl": { 00:05:12.736 
"mask": "0x40", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "blobfs": { 00:05:12.736 "mask": "0x80", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "dsa": { 00:05:12.736 "mask": "0x200", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "thread": { 00:05:12.736 "mask": "0x400", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "nvme_pcie": { 00:05:12.736 "mask": "0x800", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "iaa": { 00:05:12.736 "mask": "0x1000", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "nvme_tcp": { 00:05:12.736 "mask": "0x2000", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "bdev_nvme": { 00:05:12.736 "mask": "0x4000", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "sock": { 00:05:12.736 "mask": "0x8000", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "blob": { 00:05:12.736 "mask": "0x10000", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "bdev_raid": { 00:05:12.736 "mask": "0x20000", 00:05:12.736 "tpoint_mask": "0x0" 00:05:12.736 }, 00:05:12.736 "scheduler": { 00:05:12.736 "mask": "0x40000", 00:05:12.737 "tpoint_mask": "0x0" 00:05:12.737 } 00:05:12.737 }' 00:05:12.737 13:24:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:12.995 13:24:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:12.995 13:24:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:12.995 13:24:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:12.995 13:24:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:12.995 13:24:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:12.995 13:24:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:12.995 13:24:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:12.995 13:24:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:12.995 13:24:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:12.995 00:05:12.995 real 0m0.276s 00:05:12.995 user 0m0.244s 00:05:12.995 sys 0m0.023s 00:05:12.995 13:24:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.995 13:24:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:12.995 ************************************ 00:05:12.995 END TEST rpc_trace_cmd_test 00:05:12.995 ************************************ 00:05:13.253 13:24:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:13.253 13:24:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:13.253 13:24:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:13.253 13:24:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.253 13:24:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.253 13:24:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.253 ************************************ 00:05:13.253 START TEST rpc_daemon_integrity 00:05:13.253 ************************************ 00:05:13.253 13:24:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:13.253 13:24:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.253 13:24:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.253 13:24:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.253 
13:24:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.253 13:24:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.253 13:24:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.253 { 00:05:13.253 "name": "Malloc2", 00:05:13.253 "aliases": [ 00:05:13.253 "f2d6865a-15ca-4058-a5ce-e88b8281bfaa" 00:05:13.253 ], 00:05:13.253 "product_name": "Malloc disk", 00:05:13.253 "block_size": 512, 00:05:13.253 "num_blocks": 16384, 00:05:13.253 "uuid": "f2d6865a-15ca-4058-a5ce-e88b8281bfaa", 00:05:13.253 "assigned_rate_limits": { 00:05:13.253 "rw_ios_per_sec": 0, 00:05:13.253 "rw_mbytes_per_sec": 0, 00:05:13.253 "r_mbytes_per_sec": 0, 00:05:13.253 "w_mbytes_per_sec": 0 00:05:13.253 }, 00:05:13.253 "claimed": false, 00:05:13.253 "zoned": false, 00:05:13.253 "supported_io_types": { 00:05:13.253 "read": true, 00:05:13.253 "write": true, 00:05:13.253 "unmap": true, 00:05:13.253 "flush": true, 00:05:13.253 "reset": true, 00:05:13.253 "nvme_admin": false, 00:05:13.253 "nvme_io": false, 00:05:13.253 "nvme_io_md": false, 00:05:13.253 "write_zeroes": true, 00:05:13.253 "zcopy": true, 00:05:13.253 "get_zone_info": false, 00:05:13.253 "zone_management": false, 00:05:13.253 "zone_append": false, 00:05:13.253 "compare": false, 00:05:13.253 "compare_and_write": false, 00:05:13.253 "abort": true, 00:05:13.253 "seek_hole": false, 00:05:13.253 "seek_data": false, 00:05:13.253 "copy": true, 00:05:13.253 "nvme_iov_md": false 00:05:13.253 }, 00:05:13.253 "memory_domains": [ 00:05:13.253 { 00:05:13.253 "dma_device_id": "system", 00:05:13.253 "dma_device_type": 1 00:05:13.253 }, 00:05:13.253 { 00:05:13.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.253 "dma_device_type": 2 00:05:13.253 } 00:05:13.253 ], 00:05:13.253 "driver_specific": {} 00:05:13.253 } 00:05:13.253 ]' 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.253 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.253 [2024-11-20 13:24:25.134298] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:13.253 [2024-11-20 13:24:25.134388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:05:13.253 [2024-11-20 13:24:25.134407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x533030 00:05:13.254 [2024-11-20 13:24:25.134416] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.254 [2024-11-20 13:24:25.135848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.254 [2024-11-20 13:24:25.135879] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.254 Passthru0 00:05:13.254 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.254 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.254 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.254 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.254 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.254 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.254 { 00:05:13.254 "name": "Malloc2", 00:05:13.254 "aliases": [ 00:05:13.254 "f2d6865a-15ca-4058-a5ce-e88b8281bfaa" 00:05:13.254 ], 00:05:13.254 "product_name": "Malloc disk", 00:05:13.254 "block_size": 512, 00:05:13.254 "num_blocks": 16384, 00:05:13.254 "uuid": "f2d6865a-15ca-4058-a5ce-e88b8281bfaa", 00:05:13.254 "assigned_rate_limits": { 00:05:13.254 "rw_ios_per_sec": 0, 00:05:13.254 "rw_mbytes_per_sec": 0, 00:05:13.254 "r_mbytes_per_sec": 0, 00:05:13.254 "w_mbytes_per_sec": 0 00:05:13.254 }, 00:05:13.254 "claimed": true, 00:05:13.254 "claim_type": "exclusive_write", 00:05:13.254 "zoned": false, 00:05:13.254 "supported_io_types": { 00:05:13.254 "read": true, 00:05:13.254 "write": true, 00:05:13.254 "unmap": true, 00:05:13.254 "flush": true, 00:05:13.254 "reset": true, 00:05:13.254 "nvme_admin": false, 00:05:13.254 "nvme_io": false, 00:05:13.254 "nvme_io_md": false, 00:05:13.254 "write_zeroes": true, 00:05:13.254 "zcopy": true, 00:05:13.254 "get_zone_info": false, 00:05:13.254 "zone_management": false, 00:05:13.254 "zone_append": false, 00:05:13.254 "compare": false, 00:05:13.254 "compare_and_write": false, 00:05:13.254 "abort": true, 00:05:13.254 "seek_hole": false, 00:05:13.254 "seek_data": false, 00:05:13.254 "copy": true, 00:05:13.254 "nvme_iov_md": false 00:05:13.254 }, 00:05:13.254 "memory_domains": [ 00:05:13.254 { 00:05:13.254 "dma_device_id": "system", 00:05:13.254 "dma_device_type": 1 00:05:13.254 }, 00:05:13.254 { 00:05:13.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.254 "dma_device_type": 2 00:05:13.254 } 00:05:13.254 ], 00:05:13.254 "driver_specific": {} 00:05:13.254 }, 00:05:13.254 { 00:05:13.254 "name": "Passthru0", 00:05:13.254 "aliases": [ 00:05:13.254 "e7d47b14-036b-511f-9075-c79331b3c70b" 00:05:13.254 ], 00:05:13.254 "product_name": "passthru", 00:05:13.254 "block_size": 512, 00:05:13.254 "num_blocks": 16384, 00:05:13.254 "uuid": "e7d47b14-036b-511f-9075-c79331b3c70b", 00:05:13.254 "assigned_rate_limits": { 00:05:13.254 "rw_ios_per_sec": 0, 00:05:13.254 "rw_mbytes_per_sec": 0, 00:05:13.254 "r_mbytes_per_sec": 0, 00:05:13.254 "w_mbytes_per_sec": 0 00:05:13.254 }, 00:05:13.254 "claimed": false, 00:05:13.254 "zoned": false, 00:05:13.254 "supported_io_types": { 00:05:13.254 "read": true, 00:05:13.254 "write": true, 00:05:13.254 "unmap": true, 00:05:13.254 "flush": true, 00:05:13.254 "reset": true, 00:05:13.254 "nvme_admin": false, 00:05:13.254 "nvme_io": false, 00:05:13.254 "nvme_io_md": 
false, 00:05:13.254 "write_zeroes": true, 00:05:13.254 "zcopy": true, 00:05:13.254 "get_zone_info": false, 00:05:13.254 "zone_management": false, 00:05:13.254 "zone_append": false, 00:05:13.254 "compare": false, 00:05:13.254 "compare_and_write": false, 00:05:13.254 "abort": true, 00:05:13.254 "seek_hole": false, 00:05:13.254 "seek_data": false, 00:05:13.254 "copy": true, 00:05:13.254 "nvme_iov_md": false 00:05:13.254 }, 00:05:13.254 "memory_domains": [ 00:05:13.254 { 00:05:13.254 "dma_device_id": "system", 00:05:13.254 "dma_device_type": 1 00:05:13.254 }, 00:05:13.254 { 00:05:13.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.254 "dma_device_type": 2 00:05:13.254 } 00:05:13.254 ], 00:05:13.254 "driver_specific": { 00:05:13.254 "passthru": { 00:05:13.254 "name": "Passthru0", 00:05:13.254 "base_bdev_name": "Malloc2" 00:05:13.254 } 00:05:13.254 } 00:05:13.254 } 00:05:13.254 ]' 00:05:13.254 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.512 00:05:13.512 real 0m0.324s 00:05:13.512 user 0m0.226s 00:05:13.512 sys 0m0.039s 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.512 ************************************ 00:05:13.512 END TEST rpc_daemon_integrity 00:05:13.512 13:24:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.512 ************************************ 00:05:13.512 13:24:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:13.512 13:24:25 rpc -- rpc/rpc.sh@84 -- # killprocess 56851 00:05:13.512 13:24:25 rpc -- common/autotest_common.sh@954 -- # '[' -z 56851 ']' 00:05:13.512 13:24:25 rpc -- common/autotest_common.sh@958 -- # kill -0 56851 00:05:13.512 13:24:25 rpc -- common/autotest_common.sh@959 -- # uname 00:05:13.512 13:24:25 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.512 13:24:25 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56851 00:05:13.512 13:24:25 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.512 
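rpc_integrity and rpc_daemon_integrity above both walk the same cycle against the running target: create a malloc bdev, stack a passthru bdev on top of it, confirm via bdev_get_bdevs that two bdevs exist and that the malloc is claimed with claim_type "exclusive_write", then delete the passthru and the malloc and confirm the list is empty again. Condensed into the underlying RPC calls, as a sketch assuming a target listening on the default /var/tmp/spdk.sock:
./scripts/rpc.py bdev_malloc_create 8 512                   # 8 MiB, 512-byte blocks -> Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length                 # expect 2
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length                 # expect 0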
13:24:25 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.512 killing process with pid 56851 00:05:13.512 13:24:25 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56851' 00:05:13.512 13:24:25 rpc -- common/autotest_common.sh@973 -- # kill 56851 00:05:13.512 13:24:25 rpc -- common/autotest_common.sh@978 -- # wait 56851 00:05:14.079 00:05:14.079 real 0m2.496s 00:05:14.079 user 0m3.122s 00:05:14.079 sys 0m0.718s 00:05:14.079 13:24:25 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.079 13:24:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.079 ************************************ 00:05:14.079 END TEST rpc 00:05:14.079 ************************************ 00:05:14.079 13:24:25 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:14.079 13:24:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.079 13:24:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.079 13:24:25 -- common/autotest_common.sh@10 -- # set +x 00:05:14.079 ************************************ 00:05:14.079 START TEST skip_rpc 00:05:14.079 ************************************ 00:05:14.079 13:24:25 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:14.079 * Looking for test storage... 00:05:14.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.079 13:24:25 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.079 13:24:25 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.079 13:24:25 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.079 13:24:26 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.079 13:24:26 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:14.079 13:24:26 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.079 13:24:26 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.079 --rc genhtml_branch_coverage=1 00:05:14.079 --rc genhtml_function_coverage=1 00:05:14.079 --rc genhtml_legend=1 00:05:14.079 --rc geninfo_all_blocks=1 00:05:14.079 --rc geninfo_unexecuted_blocks=1 00:05:14.079 00:05:14.079 ' 00:05:14.079 13:24:26 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.079 --rc genhtml_branch_coverage=1 00:05:14.079 --rc genhtml_function_coverage=1 00:05:14.079 --rc genhtml_legend=1 00:05:14.079 --rc geninfo_all_blocks=1 00:05:14.079 --rc geninfo_unexecuted_blocks=1 00:05:14.079 00:05:14.079 ' 00:05:14.079 13:24:26 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.079 --rc genhtml_branch_coverage=1 00:05:14.079 --rc genhtml_function_coverage=1 00:05:14.079 --rc genhtml_legend=1 00:05:14.079 --rc geninfo_all_blocks=1 00:05:14.079 --rc geninfo_unexecuted_blocks=1 00:05:14.079 00:05:14.079 ' 00:05:14.079 13:24:26 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.079 --rc genhtml_branch_coverage=1 00:05:14.079 --rc genhtml_function_coverage=1 00:05:14.079 --rc genhtml_legend=1 00:05:14.079 --rc geninfo_all_blocks=1 00:05:14.079 --rc geninfo_unexecuted_blocks=1 00:05:14.079 00:05:14.079 ' 00:05:14.079 13:24:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.079 13:24:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:14.079 13:24:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:14.079 13:24:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.079 13:24:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.079 13:24:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.338 ************************************ 00:05:14.338 START TEST skip_rpc 00:05:14.338 ************************************ 00:05:14.338 13:24:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:14.338 13:24:26 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57049 00:05:14.338 13:24:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.338 13:24:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:14.338 13:24:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:14.338 [2024-11-20 13:24:26.099053] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:05:14.338 [2024-11-20 13:24:26.099155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57049 ] 00:05:14.338 [2024-11-20 13:24:26.247714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.596 [2024-11-20 13:24:26.307798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.596 [2024-11-20 13:24:26.382186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57049 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57049 ']' 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57049 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57049 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.902 killing process with pid 57049 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57049' 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57049 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57049 00:05:19.902 00:05:19.902 real 0m5.469s 00:05:19.902 user 0m5.069s 00:05:19.902 sys 0m0.305s 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.902 13:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.902 ************************************ 00:05:19.902 END TEST skip_rpc 00:05:19.902 ************************************ 00:05:19.902 13:24:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:19.902 13:24:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.902 13:24:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.902 13:24:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.902 ************************************ 00:05:19.902 START TEST skip_rpc_with_json 00:05:19.902 ************************************ 00:05:19.902 13:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:19.902 13:24:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:19.902 13:24:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57130 00:05:19.902 13:24:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.902 13:24:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.902 13:24:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57130 00:05:19.902 13:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57130 ']' 00:05:19.902 13:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.902 13:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.902 13:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.902 13:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.902 13:24:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.902 [2024-11-20 13:24:31.656120] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
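The skip_rpc case that finishes above starts spdk_tgt with --no-rpc-server, waits the fixed five seconds the script allows for startup, and then asserts that an RPC call fails (the NOT wrapper expects es=1 from rpc_cmd spdk_get_version) before killing pid 57049. Reproduced by hand, under the assumption that the same binary and RPC client paths are used:
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
sleep 5                                                     # mirrors the test's fixed wait
./scripts/rpc.py spdk_get_version && echo 'unexpected: RPC answered' || echo 'expected failure: no RPC server'
kill %1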
00:05:19.903 [2024-11-20 13:24:31.656278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57130 ] 00:05:19.903 [2024-11-20 13:24:31.815303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.160 [2024-11-20 13:24:31.868087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.160 [2024-11-20 13:24:31.946520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.095 [2024-11-20 13:24:32.701345] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:21.095 request: 00:05:21.095 { 00:05:21.095 "trtype": "tcp", 00:05:21.095 "method": "nvmf_get_transports", 00:05:21.095 "req_id": 1 00:05:21.095 } 00:05:21.095 Got JSON-RPC error response 00:05:21.095 response: 00:05:21.095 { 00:05:21.095 "code": -19, 00:05:21.095 "message": "No such device" 00:05:21.095 } 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.095 [2024-11-20 13:24:32.713521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.095 13:24:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:21.095 { 00:05:21.095 "subsystems": [ 00:05:21.095 { 00:05:21.095 "subsystem": "fsdev", 00:05:21.095 "config": [ 00:05:21.095 { 00:05:21.095 "method": "fsdev_set_opts", 00:05:21.095 "params": { 00:05:21.095 "fsdev_io_pool_size": 65535, 00:05:21.095 "fsdev_io_cache_size": 256 00:05:21.095 } 00:05:21.095 } 00:05:21.095 ] 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "subsystem": "keyring", 00:05:21.095 "config": [] 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "subsystem": "iobuf", 00:05:21.095 "config": [ 00:05:21.095 { 00:05:21.095 "method": "iobuf_set_options", 00:05:21.095 "params": { 00:05:21.095 "small_pool_count": 8192, 00:05:21.095 "large_pool_count": 1024, 00:05:21.095 "small_bufsize": 8192, 00:05:21.095 "large_bufsize": 135168, 00:05:21.095 "enable_numa": false 00:05:21.095 } 
00:05:21.095 } 00:05:21.095 ] 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "subsystem": "sock", 00:05:21.095 "config": [ 00:05:21.095 { 00:05:21.095 "method": "sock_set_default_impl", 00:05:21.095 "params": { 00:05:21.095 "impl_name": "uring" 00:05:21.095 } 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "method": "sock_impl_set_options", 00:05:21.095 "params": { 00:05:21.095 "impl_name": "ssl", 00:05:21.095 "recv_buf_size": 4096, 00:05:21.095 "send_buf_size": 4096, 00:05:21.095 "enable_recv_pipe": true, 00:05:21.095 "enable_quickack": false, 00:05:21.095 "enable_placement_id": 0, 00:05:21.095 "enable_zerocopy_send_server": true, 00:05:21.095 "enable_zerocopy_send_client": false, 00:05:21.095 "zerocopy_threshold": 0, 00:05:21.095 "tls_version": 0, 00:05:21.095 "enable_ktls": false 00:05:21.095 } 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "method": "sock_impl_set_options", 00:05:21.095 "params": { 00:05:21.095 "impl_name": "posix", 00:05:21.095 "recv_buf_size": 2097152, 00:05:21.095 "send_buf_size": 2097152, 00:05:21.095 "enable_recv_pipe": true, 00:05:21.095 "enable_quickack": false, 00:05:21.095 "enable_placement_id": 0, 00:05:21.095 "enable_zerocopy_send_server": true, 00:05:21.095 "enable_zerocopy_send_client": false, 00:05:21.095 "zerocopy_threshold": 0, 00:05:21.095 "tls_version": 0, 00:05:21.095 "enable_ktls": false 00:05:21.095 } 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "method": "sock_impl_set_options", 00:05:21.095 "params": { 00:05:21.095 "impl_name": "uring", 00:05:21.095 "recv_buf_size": 2097152, 00:05:21.095 "send_buf_size": 2097152, 00:05:21.095 "enable_recv_pipe": true, 00:05:21.095 "enable_quickack": false, 00:05:21.095 "enable_placement_id": 0, 00:05:21.095 "enable_zerocopy_send_server": false, 00:05:21.095 "enable_zerocopy_send_client": false, 00:05:21.095 "zerocopy_threshold": 0, 00:05:21.095 "tls_version": 0, 00:05:21.095 "enable_ktls": false 00:05:21.095 } 00:05:21.095 } 00:05:21.095 ] 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "subsystem": "vmd", 00:05:21.095 "config": [] 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "subsystem": "accel", 00:05:21.095 "config": [ 00:05:21.095 { 00:05:21.095 "method": "accel_set_options", 00:05:21.095 "params": { 00:05:21.095 "small_cache_size": 128, 00:05:21.095 "large_cache_size": 16, 00:05:21.095 "task_count": 2048, 00:05:21.095 "sequence_count": 2048, 00:05:21.095 "buf_count": 2048 00:05:21.095 } 00:05:21.095 } 00:05:21.095 ] 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "subsystem": "bdev", 00:05:21.095 "config": [ 00:05:21.095 { 00:05:21.095 "method": "bdev_set_options", 00:05:21.095 "params": { 00:05:21.095 "bdev_io_pool_size": 65535, 00:05:21.095 "bdev_io_cache_size": 256, 00:05:21.095 "bdev_auto_examine": true, 00:05:21.095 "iobuf_small_cache_size": 128, 00:05:21.095 "iobuf_large_cache_size": 16 00:05:21.095 } 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "method": "bdev_raid_set_options", 00:05:21.095 "params": { 00:05:21.095 "process_window_size_kb": 1024, 00:05:21.095 "process_max_bandwidth_mb_sec": 0 00:05:21.095 } 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "method": "bdev_iscsi_set_options", 00:05:21.095 "params": { 00:05:21.095 "timeout_sec": 30 00:05:21.095 } 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "method": "bdev_nvme_set_options", 00:05:21.095 "params": { 00:05:21.095 "action_on_timeout": "none", 00:05:21.095 "timeout_us": 0, 00:05:21.095 "timeout_admin_us": 0, 00:05:21.095 "keep_alive_timeout_ms": 10000, 00:05:21.095 "arbitration_burst": 0, 00:05:21.095 "low_priority_weight": 0, 00:05:21.095 "medium_priority_weight": 
0, 00:05:21.095 "high_priority_weight": 0, 00:05:21.095 "nvme_adminq_poll_period_us": 10000, 00:05:21.095 "nvme_ioq_poll_period_us": 0, 00:05:21.095 "io_queue_requests": 0, 00:05:21.095 "delay_cmd_submit": true, 00:05:21.095 "transport_retry_count": 4, 00:05:21.095 "bdev_retry_count": 3, 00:05:21.095 "transport_ack_timeout": 0, 00:05:21.095 "ctrlr_loss_timeout_sec": 0, 00:05:21.095 "reconnect_delay_sec": 0, 00:05:21.095 "fast_io_fail_timeout_sec": 0, 00:05:21.095 "disable_auto_failback": false, 00:05:21.095 "generate_uuids": false, 00:05:21.095 "transport_tos": 0, 00:05:21.095 "nvme_error_stat": false, 00:05:21.095 "rdma_srq_size": 0, 00:05:21.095 "io_path_stat": false, 00:05:21.095 "allow_accel_sequence": false, 00:05:21.095 "rdma_max_cq_size": 0, 00:05:21.095 "rdma_cm_event_timeout_ms": 0, 00:05:21.095 "dhchap_digests": [ 00:05:21.095 "sha256", 00:05:21.095 "sha384", 00:05:21.095 "sha512" 00:05:21.095 ], 00:05:21.095 "dhchap_dhgroups": [ 00:05:21.095 "null", 00:05:21.095 "ffdhe2048", 00:05:21.095 "ffdhe3072", 00:05:21.095 "ffdhe4096", 00:05:21.095 "ffdhe6144", 00:05:21.095 "ffdhe8192" 00:05:21.095 ] 00:05:21.095 } 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "method": "bdev_nvme_set_hotplug", 00:05:21.095 "params": { 00:05:21.095 "period_us": 100000, 00:05:21.095 "enable": false 00:05:21.095 } 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "method": "bdev_wait_for_examine" 00:05:21.095 } 00:05:21.095 ] 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "subsystem": "scsi", 00:05:21.095 "config": null 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "subsystem": "scheduler", 00:05:21.095 "config": [ 00:05:21.095 { 00:05:21.095 "method": "framework_set_scheduler", 00:05:21.095 "params": { 00:05:21.095 "name": "static" 00:05:21.095 } 00:05:21.095 } 00:05:21.095 ] 00:05:21.095 }, 00:05:21.095 { 00:05:21.095 "subsystem": "vhost_scsi", 00:05:21.095 "config": [] 00:05:21.095 }, 00:05:21.096 { 00:05:21.096 "subsystem": "vhost_blk", 00:05:21.096 "config": [] 00:05:21.096 }, 00:05:21.096 { 00:05:21.096 "subsystem": "ublk", 00:05:21.096 "config": [] 00:05:21.096 }, 00:05:21.096 { 00:05:21.096 "subsystem": "nbd", 00:05:21.096 "config": [] 00:05:21.096 }, 00:05:21.096 { 00:05:21.096 "subsystem": "nvmf", 00:05:21.096 "config": [ 00:05:21.096 { 00:05:21.096 "method": "nvmf_set_config", 00:05:21.096 "params": { 00:05:21.096 "discovery_filter": "match_any", 00:05:21.096 "admin_cmd_passthru": { 00:05:21.096 "identify_ctrlr": false 00:05:21.096 }, 00:05:21.096 "dhchap_digests": [ 00:05:21.096 "sha256", 00:05:21.096 "sha384", 00:05:21.096 "sha512" 00:05:21.096 ], 00:05:21.096 "dhchap_dhgroups": [ 00:05:21.096 "null", 00:05:21.096 "ffdhe2048", 00:05:21.096 "ffdhe3072", 00:05:21.096 "ffdhe4096", 00:05:21.096 "ffdhe6144", 00:05:21.096 "ffdhe8192" 00:05:21.096 ] 00:05:21.096 } 00:05:21.096 }, 00:05:21.096 { 00:05:21.096 "method": "nvmf_set_max_subsystems", 00:05:21.096 "params": { 00:05:21.096 "max_subsystems": 1024 00:05:21.096 } 00:05:21.096 }, 00:05:21.096 { 00:05:21.096 "method": "nvmf_set_crdt", 00:05:21.096 "params": { 00:05:21.096 "crdt1": 0, 00:05:21.096 "crdt2": 0, 00:05:21.096 "crdt3": 0 00:05:21.096 } 00:05:21.096 }, 00:05:21.096 { 00:05:21.096 "method": "nvmf_create_transport", 00:05:21.096 "params": { 00:05:21.096 "trtype": "TCP", 00:05:21.096 "max_queue_depth": 128, 00:05:21.096 "max_io_qpairs_per_ctrlr": 127, 00:05:21.096 "in_capsule_data_size": 4096, 00:05:21.096 "max_io_size": 131072, 00:05:21.096 "io_unit_size": 131072, 00:05:21.096 "max_aq_depth": 128, 00:05:21.096 "num_shared_buffers": 511, 00:05:21.096 
"buf_cache_size": 4294967295, 00:05:21.096 "dif_insert_or_strip": false, 00:05:21.096 "zcopy": false, 00:05:21.096 "c2h_success": true, 00:05:21.096 "sock_priority": 0, 00:05:21.096 "abort_timeout_sec": 1, 00:05:21.096 "ack_timeout": 0, 00:05:21.096 "data_wr_pool_size": 0 00:05:21.096 } 00:05:21.096 } 00:05:21.096 ] 00:05:21.096 }, 00:05:21.096 { 00:05:21.096 "subsystem": "iscsi", 00:05:21.096 "config": [ 00:05:21.096 { 00:05:21.096 "method": "iscsi_set_options", 00:05:21.096 "params": { 00:05:21.096 "node_base": "iqn.2016-06.io.spdk", 00:05:21.096 "max_sessions": 128, 00:05:21.096 "max_connections_per_session": 2, 00:05:21.096 "max_queue_depth": 64, 00:05:21.096 "default_time2wait": 2, 00:05:21.096 "default_time2retain": 20, 00:05:21.096 "first_burst_length": 8192, 00:05:21.096 "immediate_data": true, 00:05:21.096 "allow_duplicated_isid": false, 00:05:21.096 "error_recovery_level": 0, 00:05:21.096 "nop_timeout": 60, 00:05:21.096 "nop_in_interval": 30, 00:05:21.096 "disable_chap": false, 00:05:21.096 "require_chap": false, 00:05:21.096 "mutual_chap": false, 00:05:21.096 "chap_group": 0, 00:05:21.096 "max_large_datain_per_connection": 64, 00:05:21.096 "max_r2t_per_connection": 4, 00:05:21.096 "pdu_pool_size": 36864, 00:05:21.096 "immediate_data_pool_size": 16384, 00:05:21.096 "data_out_pool_size": 2048 00:05:21.096 } 00:05:21.096 } 00:05:21.096 ] 00:05:21.096 } 00:05:21.096 ] 00:05:21.096 } 00:05:21.096 13:24:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:21.096 13:24:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57130 00:05:21.096 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57130 ']' 00:05:21.096 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57130 00:05:21.096 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:21.096 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.096 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57130 00:05:21.096 killing process with pid 57130 00:05:21.096 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.096 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.096 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57130' 00:05:21.096 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57130 00:05:21.096 13:24:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57130 00:05:21.663 13:24:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57163 00:05:21.663 13:24:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:21.663 13:24:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57163 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57163 ']' 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57163 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:26.929 13:24:38 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57163 00:05:26.929 killing process with pid 57163 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57163' 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57163 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57163 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:26.929 ************************************ 00:05:26.929 END TEST skip_rpc_with_json 00:05:26.929 ************************************ 00:05:26.929 00:05:26.929 real 0m7.216s 00:05:26.929 user 0m6.923s 00:05:26.929 sys 0m0.786s 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.929 13:24:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:26.929 13:24:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.929 13:24:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.929 13:24:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.929 ************************************ 00:05:26.929 START TEST skip_rpc_with_delay 00:05:26.929 ************************************ 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.929 13:24:38 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:26.929 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.187 [2024-11-20 13:24:38.895015] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:27.187 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:27.187 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.187 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:27.188 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.188 ************************************ 00:05:27.188 END TEST skip_rpc_with_delay 00:05:27.188 ************************************ 00:05:27.188 00:05:27.188 real 0m0.094s 00:05:27.188 user 0m0.055s 00:05:27.188 sys 0m0.037s 00:05:27.188 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.188 13:24:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:27.188 13:24:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:27.188 13:24:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:27.188 13:24:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:27.188 13:24:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.188 13:24:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.188 13:24:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.188 ************************************ 00:05:27.188 START TEST exit_on_failed_rpc_init 00:05:27.188 ************************************ 00:05:27.188 13:24:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:27.188 13:24:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57273 00:05:27.188 13:24:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57273 00:05:27.188 13:24:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.188 13:24:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57273 ']' 00:05:27.188 13:24:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.188 13:24:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.188 13:24:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.188 13:24:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.188 13:24:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.188 [2024-11-20 13:24:39.037603] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
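The skip_rpc_with_delay result above is the expected one: --wait-for-rpc asks the app to pause initialization until an RPC arrives, which is contradictory when --no-rpc-server disables the RPC server, so spdk_app_start refuses to start. A minimal sketch of the same negative check outside the NOT/es bookkeeping of autotest_common.sh (binary path as used in this run):

    # Expect spdk_tgt to reject the contradictory flag pair and exit non-zero.
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target accepted --no-rpc-server together with --wait-for-rpc" >&2
        exit 1
    fi
    echo "got the expected startup error"
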
00:05:27.188 [2024-11-20 13:24:39.038390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57273 ] 00:05:27.445 [2024-11-20 13:24:39.186844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.445 [2024-11-20 13:24:39.250900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.445 [2024-11-20 13:24:39.326909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:28.379 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.379 [2024-11-20 13:24:40.137504] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:05:28.379 [2024-11-20 13:24:40.137596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57291 ] 00:05:28.379 [2024-11-20 13:24:40.280980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.638 [2024-11-20 13:24:40.344722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.638 [2024-11-20 13:24:40.344807] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
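The second spdk_tgt in exit_on_failed_rpc_init fails by design: both instances default their RPC listener to /var/tmp/spdk.sock, so the -m 0x2 copy cannot bind it. When two targets genuinely need to coexist, each one gets its own socket via -r, the same flag the json_config tests below use for /var/tmp/spdk_tgt.sock. A hedged sketch with illustrative socket paths (rpc_get_methods is used only as a harmless probe and is not taken from this trace):

    # Hypothetical socket paths; any two distinct paths avoid the "in use" error.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    # Each instance is then driven through its own socket, e.g.:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_a.sock rpc_get_methods
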
00:05:28.638 [2024-11-20 13:24:40.344822] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:28.638 [2024-11-20 13:24:40.344831] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57273 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57273 ']' 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57273 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57273 00:05:28.638 killing process with pid 57273 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57273' 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57273 00:05:28.638 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57273 00:05:28.897 ************************************ 00:05:28.897 END TEST exit_on_failed_rpc_init 00:05:28.897 ************************************ 00:05:28.897 00:05:28.897 real 0m1.855s 00:05:28.897 user 0m2.172s 00:05:28.897 sys 0m0.417s 00:05:28.897 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.897 13:24:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.156 13:24:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:29.156 ************************************ 00:05:29.156 END TEST skip_rpc 00:05:29.156 ************************************ 00:05:29.156 00:05:29.156 real 0m15.021s 00:05:29.156 user 0m14.402s 00:05:29.156 sys 0m1.745s 00:05:29.156 13:24:40 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.156 13:24:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.156 13:24:40 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:29.156 13:24:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.156 13:24:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.156 13:24:40 -- common/autotest_common.sh@10 -- # set +x 00:05:29.156 
************************************ 00:05:29.156 START TEST rpc_client 00:05:29.156 ************************************ 00:05:29.156 13:24:40 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:29.156 * Looking for test storage... 00:05:29.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:29.156 13:24:40 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.156 13:24:41 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.156 13:24:41 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.156 13:24:41 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:29.156 13:24:41 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:29.157 13:24:41 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.157 13:24:41 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:29.157 13:24:41 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.157 13:24:41 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:29.157 13:24:41 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:29.157 13:24:41 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.157 13:24:41 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:29.157 13:24:41 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.157 13:24:41 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.157 13:24:41 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.157 13:24:41 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:29.157 13:24:41 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.157 13:24:41 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.157 --rc genhtml_branch_coverage=1 00:05:29.157 --rc genhtml_function_coverage=1 00:05:29.157 --rc genhtml_legend=1 00:05:29.157 --rc geninfo_all_blocks=1 00:05:29.157 --rc geninfo_unexecuted_blocks=1 00:05:29.157 00:05:29.157 ' 00:05:29.157 13:24:41 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.157 --rc genhtml_branch_coverage=1 00:05:29.157 --rc genhtml_function_coverage=1 00:05:29.157 --rc genhtml_legend=1 00:05:29.157 --rc geninfo_all_blocks=1 00:05:29.157 --rc geninfo_unexecuted_blocks=1 00:05:29.157 00:05:29.157 ' 00:05:29.157 13:24:41 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.157 --rc genhtml_branch_coverage=1 00:05:29.157 --rc genhtml_function_coverage=1 00:05:29.157 --rc genhtml_legend=1 00:05:29.157 --rc geninfo_all_blocks=1 00:05:29.157 --rc geninfo_unexecuted_blocks=1 00:05:29.157 00:05:29.157 ' 00:05:29.157 13:24:41 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.157 --rc genhtml_branch_coverage=1 00:05:29.157 --rc genhtml_function_coverage=1 00:05:29.157 --rc genhtml_legend=1 00:05:29.157 --rc geninfo_all_blocks=1 00:05:29.157 --rc geninfo_unexecuted_blocks=1 00:05:29.157 00:05:29.157 ' 00:05:29.157 13:24:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:29.416 OK 00:05:29.416 13:24:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:29.416 00:05:29.416 real 0m0.212s 00:05:29.416 user 0m0.132s 00:05:29.416 sys 0m0.093s 00:05:29.416 13:24:41 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.416 13:24:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:29.416 ************************************ 00:05:29.416 END TEST rpc_client 00:05:29.416 ************************************ 00:05:29.416 13:24:41 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:29.416 13:24:41 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.416 13:24:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.416 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:05:29.416 ************************************ 00:05:29.416 START TEST json_config 00:05:29.416 ************************************ 00:05:29.416 13:24:41 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:29.416 13:24:41 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.416 13:24:41 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.416 13:24:41 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.416 13:24:41 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.416 13:24:41 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.416 13:24:41 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.416 13:24:41 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.416 13:24:41 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.416 13:24:41 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.416 13:24:41 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.416 13:24:41 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.416 13:24:41 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.416 13:24:41 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.416 13:24:41 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.416 13:24:41 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.416 13:24:41 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:29.416 13:24:41 json_config -- scripts/common.sh@345 -- # : 1 00:05:29.416 13:24:41 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.416 13:24:41 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.416 13:24:41 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:29.416 13:24:41 json_config -- scripts/common.sh@353 -- # local d=1 00:05:29.416 13:24:41 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.416 13:24:41 json_config -- scripts/common.sh@355 -- # echo 1 00:05:29.416 13:24:41 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.416 13:24:41 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:29.416 13:24:41 json_config -- scripts/common.sh@353 -- # local d=2 00:05:29.416 13:24:41 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.416 13:24:41 json_config -- scripts/common.sh@355 -- # echo 2 00:05:29.416 13:24:41 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.416 13:24:41 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.416 13:24:41 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.416 13:24:41 json_config -- scripts/common.sh@368 -- # return 0 00:05:29.416 13:24:41 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.416 13:24:41 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.416 --rc genhtml_branch_coverage=1 00:05:29.416 --rc genhtml_function_coverage=1 00:05:29.416 --rc genhtml_legend=1 00:05:29.416 --rc geninfo_all_blocks=1 00:05:29.416 --rc geninfo_unexecuted_blocks=1 00:05:29.416 00:05:29.416 ' 00:05:29.416 13:24:41 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.416 --rc genhtml_branch_coverage=1 00:05:29.416 --rc genhtml_function_coverage=1 00:05:29.416 --rc genhtml_legend=1 00:05:29.416 --rc geninfo_all_blocks=1 00:05:29.416 --rc geninfo_unexecuted_blocks=1 00:05:29.416 00:05:29.416 ' 00:05:29.416 13:24:41 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.416 --rc genhtml_branch_coverage=1 00:05:29.416 --rc genhtml_function_coverage=1 00:05:29.416 --rc genhtml_legend=1 00:05:29.416 --rc geninfo_all_blocks=1 00:05:29.416 --rc geninfo_unexecuted_blocks=1 00:05:29.416 00:05:29.416 ' 00:05:29.416 13:24:41 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.416 --rc genhtml_branch_coverage=1 00:05:29.416 --rc genhtml_function_coverage=1 00:05:29.416 --rc genhtml_legend=1 00:05:29.416 --rc geninfo_all_blocks=1 00:05:29.416 --rc geninfo_unexecuted_blocks=1 00:05:29.416 00:05:29.416 ' 00:05:29.416 13:24:41 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:29.416 13:24:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:29.416 13:24:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.416 13:24:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.416 13:24:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.416 13:24:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.416 13:24:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.416 13:24:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.416 13:24:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.416 13:24:41 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.416 13:24:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.416 13:24:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.416 13:24:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.417 13:24:41 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.417 13:24:41 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.417 13:24:41 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.417 13:24:41 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.417 13:24:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.417 13:24:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.417 13:24:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.417 13:24:41 json_config -- paths/export.sh@5 -- # export PATH 00:05:29.417 13:24:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@51 -- # : 0 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:29.417 13:24:41 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:29.417 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:29.417 13:24:41 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:29.417 13:24:41 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:29.676 13:24:41 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:29.676 13:24:41 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:29.676 INFO: JSON configuration test init 00:05:29.676 13:24:41 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:29.676 13:24:41 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:29.676 13:24:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.676 13:24:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.676 13:24:41 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:29.676 13:24:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.676 13:24:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.676 Waiting for target to run... 00:05:29.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:29.676 13:24:41 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:29.676 13:24:41 json_config -- json_config/common.sh@9 -- # local app=target 00:05:29.676 13:24:41 json_config -- json_config/common.sh@10 -- # shift 00:05:29.676 13:24:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:29.676 13:24:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:29.676 13:24:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:29.676 13:24:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.676 13:24:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.676 13:24:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57430 00:05:29.676 13:24:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:29.676 13:24:41 json_config -- json_config/common.sh@25 -- # waitforlisten 57430 /var/tmp/spdk_tgt.sock 00:05:29.676 13:24:41 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:29.676 13:24:41 json_config -- common/autotest_common.sh@835 -- # '[' -z 57430 ']' 00:05:29.676 13:24:41 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.676 13:24:41 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.676 13:24:41 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.676 13:24:41 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.676 13:24:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.676 [2024-11-20 13:24:41.443733] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
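waitforlisten 57430 above polls the freshly started target (launched with --wait-for-rpc and -r /var/tmp/spdk_tgt.sock, max_retries=100) until its RPC socket answers. One way to express the same wait outside the harness; using rpc_get_methods as the readiness probe is an assumption here, not something read from this trace:

    sock=/var/tmp/spdk_tgt.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do                  # mirrors max_retries=100 from the trace
        if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            break                              # target is listening and serving RPCs
        fi
        sleep 0.1
    done
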
00:05:29.676 [2024-11-20 13:24:41.444039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57430 ] 00:05:29.935 [2024-11-20 13:24:41.866692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.193 [2024-11-20 13:24:41.927381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.760 13:24:42 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.760 13:24:42 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:30.760 13:24:42 json_config -- json_config/common.sh@26 -- # echo '' 00:05:30.760 00:05:30.760 13:24:42 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:30.760 13:24:42 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:30.760 13:24:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.760 13:24:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.760 13:24:42 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:30.760 13:24:42 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:30.760 13:24:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.760 13:24:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.760 13:24:42 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:30.760 13:24:42 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:30.760 13:24:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:31.018 [2024-11-20 13:24:42.880652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.276 13:24:43 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:31.276 13:24:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:31.276 13:24:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.277 13:24:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.277 13:24:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:31.277 13:24:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:31.277 13:24:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:31.277 13:24:43 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:31.277 13:24:43 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:31.277 13:24:43 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:31.277 13:24:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:31.277 13:24:43 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@54 -- # sort 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:31.535 13:24:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:31.535 13:24:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:31.535 13:24:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.535 13:24:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:31.535 13:24:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:31.535 13:24:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:31.793 MallocForNvmf0 00:05:32.052 13:24:43 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.052 13:24:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.052 MallocForNvmf1 00:05:32.311 13:24:44 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:32.311 13:24:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:32.311 [2024-11-20 13:24:44.254526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.569 13:24:44 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:32.569 13:24:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:32.827 13:24:44 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.827 13:24:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:33.086 13:24:44 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:33.086 13:24:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:33.344 13:24:45 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.344 13:24:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.602 [2024-11-20 13:24:45.323256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:33.602 13:24:45 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:33.602 13:24:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:33.602 13:24:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.602 13:24:45 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:33.602 13:24:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:33.602 13:24:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.602 13:24:45 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:33.602 13:24:45 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.602 13:24:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.861 MallocBdevForConfigChangeCheck 00:05:33.861 13:24:45 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:33.861 13:24:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:33.861 13:24:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.861 13:24:45 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:33.861 13:24:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.428 INFO: shutting down applications... 00:05:34.428 13:24:46 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
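The create_nvmf_subsystem_config steps above reduce to a short rpc.py sequence against /var/tmp/spdk_tgt.sock; each call below is taken from the trace, only the shell variables and the save_config redirect target are illustrative:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MB bdev, 512-byte blocks
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB bdev, 1024-byte blocks
    $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $rpc -s $sock save_config > spdk_tgt_config.json                  # config reused for the --json relaunch below
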
00:05:34.428 13:24:46 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:34.428 13:24:46 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:34.428 13:24:46 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:34.428 13:24:46 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:34.687 Calling clear_iscsi_subsystem 00:05:34.687 Calling clear_nvmf_subsystem 00:05:34.687 Calling clear_nbd_subsystem 00:05:34.687 Calling clear_ublk_subsystem 00:05:34.687 Calling clear_vhost_blk_subsystem 00:05:34.687 Calling clear_vhost_scsi_subsystem 00:05:34.687 Calling clear_bdev_subsystem 00:05:34.687 13:24:46 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:34.687 13:24:46 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:34.687 13:24:46 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:34.687 13:24:46 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:34.687 13:24:46 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.687 13:24:46 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.268 13:24:46 json_config -- json_config/json_config.sh@352 -- # break 00:05:35.268 13:24:46 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:35.268 13:24:46 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:35.268 13:24:46 json_config -- json_config/common.sh@31 -- # local app=target 00:05:35.268 13:24:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.268 13:24:46 json_config -- json_config/common.sh@35 -- # [[ -n 57430 ]] 00:05:35.268 13:24:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57430 00:05:35.268 13:24:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.268 13:24:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.268 13:24:46 json_config -- json_config/common.sh@41 -- # kill -0 57430 00:05:35.268 13:24:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.532 13:24:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.532 13:24:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.532 13:24:47 json_config -- json_config/common.sh@41 -- # kill -0 57430 00:05:35.532 13:24:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:35.532 13:24:47 json_config -- json_config/common.sh@43 -- # break 00:05:35.532 13:24:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:35.532 SPDK target shutdown done 00:05:35.532 INFO: relaunching applications... 00:05:35.532 13:24:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:35.532 13:24:47 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
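json_config_test_shutdown_app above stops the target cooperatively: one SIGINT, then up to 30 half-second kill -0 polls before the 'SPDK target shutdown done' message. The same pattern in isolation (pid taken from this run; the escalation path for a hung target is not shown in the trace and is omitted):

    pid=57430                          # target pid from this run
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done
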
00:05:35.532 13:24:47 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:35.532 13:24:47 json_config -- json_config/common.sh@9 -- # local app=target 00:05:35.532 13:24:47 json_config -- json_config/common.sh@10 -- # shift 00:05:35.532 13:24:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.532 13:24:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.532 13:24:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.532 13:24:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.532 13:24:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.532 13:24:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57626 00:05:35.532 13:24:47 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:35.532 13:24:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.532 Waiting for target to run... 00:05:35.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.532 13:24:47 json_config -- json_config/common.sh@25 -- # waitforlisten 57626 /var/tmp/spdk_tgt.sock 00:05:35.532 13:24:47 json_config -- common/autotest_common.sh@835 -- # '[' -z 57626 ']' 00:05:35.532 13:24:47 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.532 13:24:47 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.532 13:24:47 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.532 13:24:47 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.532 13:24:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.532 [2024-11-20 13:24:47.483277] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:05:35.532 [2024-11-20 13:24:47.483630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57626 ] 00:05:36.099 [2024-11-20 13:24:47.922964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.099 [2024-11-20 13:24:47.974791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.358 [2024-11-20 13:24:48.113275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.616 [2024-11-20 13:24:48.332385] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:36.616 [2024-11-20 13:24:48.364500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:36.616 13:24:48 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.616 13:24:48 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:36.616 00:05:36.616 13:24:48 json_config -- json_config/common.sh@26 -- # echo '' 00:05:36.616 13:24:48 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:36.616 INFO: Checking if target configuration is the same... 
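The "same configuration" check that follows normalizes both sides with config_filter.py -method sort before diffing, presumably so that ordering differences in save_config output do not register as changes. Stripped of the mktemp bookkeeping seen in the json_diff.sh trace (/tmp/62.2rb and friends), the comparison amounts to roughly this sketch; the exact plumbing of the real script may differ:

    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    live=$(mktemp)      # what the relaunched target reports now
    file=$(mktemp)      # what it was started with via --json
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$live"
    $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$file"
    diff -u "$live" "$file" && echo 'INFO: JSON config files are the same'
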
00:05:36.616 13:24:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:36.616 13:24:48 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:36.616 13:24:48 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:36.616 13:24:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.616 + '[' 2 -ne 2 ']' 00:05:36.616 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:36.616 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:36.616 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:36.616 +++ basename /dev/fd/62 00:05:36.616 ++ mktemp /tmp/62.XXX 00:05:36.616 + tmp_file_1=/tmp/62.2rb 00:05:36.616 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:36.616 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:36.616 + tmp_file_2=/tmp/spdk_tgt_config.json.bFf 00:05:36.616 + ret=0 00:05:36.616 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:37.184 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:37.184 + diff -u /tmp/62.2rb /tmp/spdk_tgt_config.json.bFf 00:05:37.184 INFO: JSON config files are the same 00:05:37.184 + echo 'INFO: JSON config files are the same' 00:05:37.184 + rm /tmp/62.2rb /tmp/spdk_tgt_config.json.bFf 00:05:37.184 + exit 0 00:05:37.184 13:24:48 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:37.184 INFO: changing configuration and checking if this can be detected... 00:05:37.184 13:24:48 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:37.184 13:24:48 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:37.184 13:24:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:37.442 13:24:49 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.442 13:24:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:37.442 13:24:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.442 + '[' 2 -ne 2 ']' 00:05:37.442 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:37.442 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
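The check json_diff.sh performs above is: dump the running target's configuration with save_config, canonicalize both JSON documents with config_filter.py -method sort, and diff the results; an empty diff means the relaunched target reproduced the original configuration. A condensed sketch (the real script feeds the live config through /dev/fd/62 and mktemp names rather than the fixed paths used here):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$SPDK/test/json_config/config_filter.py" -method sort > /tmp/live.sorted
    "$SPDK/test/json_config/config_filter.py" -method sort \
        < "$SPDK/spdk_tgt_config.json" > /tmp/file.sorted
    diff -u /tmp/live.sorted /tmp/file.sorted && echo 'INFO: JSON config files are the same'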
00:05:37.442 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:37.442 +++ basename /dev/fd/62 00:05:37.442 ++ mktemp /tmp/62.XXX 00:05:37.442 + tmp_file_1=/tmp/62.Dq8 00:05:37.442 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.442 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.442 + tmp_file_2=/tmp/spdk_tgt_config.json.qDR 00:05:37.442 + ret=0 00:05:37.442 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:38.009 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:38.009 + diff -u /tmp/62.Dq8 /tmp/spdk_tgt_config.json.qDR 00:05:38.009 + ret=1 00:05:38.009 + echo '=== Start of file: /tmp/62.Dq8 ===' 00:05:38.009 + cat /tmp/62.Dq8 00:05:38.009 + echo '=== End of file: /tmp/62.Dq8 ===' 00:05:38.009 + echo '' 00:05:38.009 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qDR ===' 00:05:38.009 + cat /tmp/spdk_tgt_config.json.qDR 00:05:38.009 + echo '=== End of file: /tmp/spdk_tgt_config.json.qDR ===' 00:05:38.009 + echo '' 00:05:38.009 + rm /tmp/62.Dq8 /tmp/spdk_tgt_config.json.qDR 00:05:38.009 + exit 1 00:05:38.009 INFO: configuration change detected. 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@324 -- # [[ -n 57626 ]] 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.009 13:24:49 json_config -- json_config/json_config.sh@330 -- # killprocess 57626 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@954 -- # '[' -z 57626 ']' 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@958 -- # kill -0 57626 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@959 -- # uname 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57626 00:05:38.009 
13:24:49 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.009 killing process with pid 57626 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57626' 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@973 -- # kill 57626 00:05:38.009 13:24:49 json_config -- common/autotest_common.sh@978 -- # wait 57626 00:05:38.268 13:24:50 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:38.268 13:24:50 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:38.268 13:24:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.268 13:24:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.268 13:24:50 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:38.268 INFO: Success 00:05:38.268 13:24:50 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:38.268 00:05:38.268 real 0m8.929s 00:05:38.268 user 0m12.894s 00:05:38.268 sys 0m1.783s 00:05:38.268 13:24:50 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.268 13:24:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.268 ************************************ 00:05:38.268 END TEST json_config 00:05:38.268 ************************************ 00:05:38.268 13:24:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:38.268 13:24:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.268 13:24:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.268 13:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:38.268 ************************************ 00:05:38.268 START TEST json_config_extra_key 00:05:38.268 ************************************ 00:05:38.268 13:24:50 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:38.268 13:24:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.268 13:24:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.268 13:24:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.527 13:24:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.527 13:24:50 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.527 13:24:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:38.527 13:24:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.527 13:24:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.527 --rc genhtml_branch_coverage=1 00:05:38.527 --rc genhtml_function_coverage=1 00:05:38.527 --rc genhtml_legend=1 00:05:38.527 --rc geninfo_all_blocks=1 00:05:38.527 --rc geninfo_unexecuted_blocks=1 00:05:38.527 00:05:38.527 ' 00:05:38.527 13:24:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.527 --rc genhtml_branch_coverage=1 00:05:38.527 --rc genhtml_function_coverage=1 00:05:38.527 --rc genhtml_legend=1 00:05:38.527 --rc geninfo_all_blocks=1 00:05:38.527 --rc geninfo_unexecuted_blocks=1 00:05:38.527 00:05:38.528 ' 00:05:38.528 13:24:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.528 --rc genhtml_branch_coverage=1 00:05:38.528 --rc genhtml_function_coverage=1 00:05:38.528 --rc genhtml_legend=1 00:05:38.528 --rc geninfo_all_blocks=1 00:05:38.528 --rc geninfo_unexecuted_blocks=1 00:05:38.528 00:05:38.528 ' 00:05:38.528 13:24:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.528 --rc genhtml_branch_coverage=1 00:05:38.528 --rc genhtml_function_coverage=1 00:05:38.528 --rc genhtml_legend=1 00:05:38.528 --rc geninfo_all_blocks=1 00:05:38.528 --rc geninfo_unexecuted_blocks=1 00:05:38.528 00:05:38.528 ' 00:05:38.528 13:24:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.528 13:24:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:38.528 13:24:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.528 13:24:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.528 13:24:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.528 13:24:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.528 13:24:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.528 13:24:50 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.528 13:24:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:38.528 13:24:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:38.528 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:38.528 13:24:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:38.528 13:24:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:38.528 13:24:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:38.528 13:24:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:38.528 13:24:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:38.528 13:24:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:38.528 13:24:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:38.528 13:24:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:38.528 13:24:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:38.528 13:24:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:38.528 13:24:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:38.528 INFO: launching applications... 00:05:38.528 13:24:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
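The declarations above show how json_config/common.sh tracks each application in bash associative arrays keyed by name ('target' here): its PID, RPC socket, spdk_tgt parameters, and config file. A condensed sketch of how those arrays drive the launch that follows:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    app=target
    # Word splitting of app_params is intentional: it expands to '-m 0x1 -s 1024'
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!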
00:05:38.528 13:24:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:38.528 13:24:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:38.528 13:24:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:38.528 13:24:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:38.528 13:24:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:38.528 13:24:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:38.528 13:24:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.528 13:24:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.528 13:24:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57780 00:05:38.528 13:24:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:38.528 Waiting for target to run... 00:05:38.528 13:24:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57780 /var/tmp/spdk_tgt.sock 00:05:38.528 13:24:50 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:38.528 13:24:50 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57780 ']' 00:05:38.528 13:24:50 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:38.528 13:24:50 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:38.528 13:24:50 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:38.528 13:24:50 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.528 13:24:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:38.528 [2024-11-20 13:24:50.413670] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:05:38.528 [2024-11-20 13:24:50.413774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57780 ] 00:05:39.096 [2024-11-20 13:24:50.850550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.096 [2024-11-20 13:24:50.903663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.096 [2024-11-20 13:24:50.937183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.664 13:24:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.664 13:24:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:39.664 00:05:39.664 13:24:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:39.664 INFO: shutting down applications... 00:05:39.664 13:24:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:39.664 13:24:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:39.664 13:24:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:39.664 13:24:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:39.664 13:24:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57780 ]] 00:05:39.664 13:24:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57780 00:05:39.664 13:24:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:39.664 13:24:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.664 13:24:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57780 00:05:39.664 13:24:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.230 13:24:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.230 13:24:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.230 13:24:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57780 00:05:40.230 13:24:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.230 13:24:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:40.230 13:24:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.230 SPDK target shutdown done 00:05:40.230 13:24:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.230 Success 00:05:40.230 13:24:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:40.230 00:05:40.230 real 0m1.784s 00:05:40.230 user 0m1.713s 00:05:40.230 sys 0m0.468s 00:05:40.230 13:24:51 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.230 13:24:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:40.230 ************************************ 00:05:40.230 END TEST json_config_extra_key 00:05:40.230 ************************************ 00:05:40.230 13:24:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.230 13:24:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.230 13:24:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.230 13:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:40.230 ************************************ 00:05:40.230 START TEST alias_rpc 00:05:40.230 ************************************ 00:05:40.230 13:24:51 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.230 * Looking for test storage... 
00:05:40.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:40.230 13:24:52 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.230 13:24:52 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.230 13:24:52 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.230 13:24:52 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.230 13:24:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.231 13:24:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.489 13:24:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.489 13:24:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.489 13:24:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.489 13:24:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.489 13:24:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:40.489 13:24:52 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.489 13:24:52 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.489 --rc genhtml_branch_coverage=1 00:05:40.489 --rc genhtml_function_coverage=1 00:05:40.489 --rc genhtml_legend=1 00:05:40.489 --rc geninfo_all_blocks=1 00:05:40.489 --rc geninfo_unexecuted_blocks=1 00:05:40.489 00:05:40.489 ' 00:05:40.489 13:24:52 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.489 --rc genhtml_branch_coverage=1 00:05:40.489 --rc genhtml_function_coverage=1 00:05:40.489 --rc genhtml_legend=1 00:05:40.489 --rc geninfo_all_blocks=1 00:05:40.489 --rc geninfo_unexecuted_blocks=1 00:05:40.489 00:05:40.489 ' 00:05:40.489 13:24:52 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.489 --rc genhtml_branch_coverage=1 00:05:40.489 --rc genhtml_function_coverage=1 00:05:40.489 --rc genhtml_legend=1 00:05:40.489 --rc geninfo_all_blocks=1 00:05:40.489 --rc geninfo_unexecuted_blocks=1 00:05:40.489 00:05:40.489 ' 00:05:40.489 13:24:52 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.489 --rc genhtml_branch_coverage=1 00:05:40.489 --rc genhtml_function_coverage=1 00:05:40.489 --rc genhtml_legend=1 00:05:40.489 --rc geninfo_all_blocks=1 00:05:40.489 --rc geninfo_unexecuted_blocks=1 00:05:40.489 00:05:40.489 ' 00:05:40.489 13:24:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.489 13:24:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57858 00:05:40.489 13:24:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.489 13:24:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57858 00:05:40.489 13:24:52 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57858 ']' 00:05:40.489 13:24:52 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.489 13:24:52 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.489 13:24:52 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.489 13:24:52 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.489 13:24:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.489 [2024-11-20 13:24:52.254955] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:05:40.489 [2024-11-20 13:24:52.255069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57858 ] 00:05:40.489 [2024-11-20 13:24:52.403695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.748 [2024-11-20 13:24:52.468676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.748 [2024-11-20 13:24:52.541960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.006 13:24:52 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.006 13:24:52 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:41.006 13:24:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:41.263 13:24:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57858 00:05:41.263 13:24:53 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57858 ']' 00:05:41.263 13:24:53 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57858 00:05:41.263 13:24:53 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:41.263 13:24:53 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.263 13:24:53 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57858 00:05:41.263 13:24:53 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.263 13:24:53 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.263 killing process with pid 57858 00:05:41.263 13:24:53 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57858' 00:05:41.263 13:24:53 alias_rpc -- common/autotest_common.sh@973 -- # kill 57858 00:05:41.263 13:24:53 alias_rpc -- common/autotest_common.sh@978 -- # wait 57858 00:05:41.521 00:05:41.521 real 0m1.462s 00:05:41.521 user 0m1.518s 00:05:41.521 sys 0m0.430s 00:05:41.521 13:24:53 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.521 13:24:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.521 ************************************ 00:05:41.521 END TEST alias_rpc 00:05:41.521 ************************************ 00:05:41.780 13:24:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:41.780 13:24:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:41.780 13:24:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.780 13:24:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.780 13:24:53 -- common/autotest_common.sh@10 -- # set +x 00:05:41.780 ************************************ 00:05:41.780 START TEST spdkcli_tcp 00:05:41.780 ************************************ 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:41.780 * Looking for test storage... 
00:05:41.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.780 13:24:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.780 --rc genhtml_branch_coverage=1 00:05:41.780 --rc genhtml_function_coverage=1 00:05:41.780 --rc genhtml_legend=1 00:05:41.780 --rc geninfo_all_blocks=1 00:05:41.780 --rc geninfo_unexecuted_blocks=1 00:05:41.780 00:05:41.780 ' 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.780 --rc genhtml_branch_coverage=1 00:05:41.780 --rc genhtml_function_coverage=1 00:05:41.780 --rc genhtml_legend=1 00:05:41.780 --rc geninfo_all_blocks=1 00:05:41.780 --rc geninfo_unexecuted_blocks=1 00:05:41.780 
00:05:41.780 ' 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.780 --rc genhtml_branch_coverage=1 00:05:41.780 --rc genhtml_function_coverage=1 00:05:41.780 --rc genhtml_legend=1 00:05:41.780 --rc geninfo_all_blocks=1 00:05:41.780 --rc geninfo_unexecuted_blocks=1 00:05:41.780 00:05:41.780 ' 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.780 --rc genhtml_branch_coverage=1 00:05:41.780 --rc genhtml_function_coverage=1 00:05:41.780 --rc genhtml_legend=1 00:05:41.780 --rc geninfo_all_blocks=1 00:05:41.780 --rc geninfo_unexecuted_blocks=1 00:05:41.780 00:05:41.780 ' 00:05:41.780 13:24:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:41.780 13:24:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:41.780 13:24:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:41.780 13:24:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:41.780 13:24:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:41.780 13:24:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:41.780 13:24:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.780 13:24:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57929 00:05:41.780 13:24:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:41.780 13:24:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57929 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57929 ']' 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.780 13:24:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.038 [2024-11-20 13:24:53.764861] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:05:42.038 [2024-11-20 13:24:53.764964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57929 ] 00:05:42.038 [2024-11-20 13:24:53.912673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.038 [2024-11-20 13:24:53.982289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.038 [2024-11-20 13:24:53.982304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.297 [2024-11-20 13:24:54.057193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.555 13:24:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.555 13:24:54 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:42.555 13:24:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57944 00:05:42.555 13:24:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:42.555 13:24:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.815 [ 00:05:42.815 "bdev_malloc_delete", 00:05:42.815 "bdev_malloc_create", 00:05:42.815 "bdev_null_resize", 00:05:42.815 "bdev_null_delete", 00:05:42.815 "bdev_null_create", 00:05:42.815 "bdev_nvme_cuse_unregister", 00:05:42.815 "bdev_nvme_cuse_register", 00:05:42.815 "bdev_opal_new_user", 00:05:42.815 "bdev_opal_set_lock_state", 00:05:42.815 "bdev_opal_delete", 00:05:42.815 "bdev_opal_get_info", 00:05:42.815 "bdev_opal_create", 00:05:42.815 "bdev_nvme_opal_revert", 00:05:42.815 "bdev_nvme_opal_init", 00:05:42.815 "bdev_nvme_send_cmd", 00:05:42.815 "bdev_nvme_set_keys", 00:05:42.815 "bdev_nvme_get_path_iostat", 00:05:42.815 "bdev_nvme_get_mdns_discovery_info", 00:05:42.815 "bdev_nvme_stop_mdns_discovery", 00:05:42.815 "bdev_nvme_start_mdns_discovery", 00:05:42.815 "bdev_nvme_set_multipath_policy", 00:05:42.815 "bdev_nvme_set_preferred_path", 00:05:42.815 "bdev_nvme_get_io_paths", 00:05:42.815 "bdev_nvme_remove_error_injection", 00:05:42.815 "bdev_nvme_add_error_injection", 00:05:42.815 "bdev_nvme_get_discovery_info", 00:05:42.815 "bdev_nvme_stop_discovery", 00:05:42.815 "bdev_nvme_start_discovery", 00:05:42.815 "bdev_nvme_get_controller_health_info", 00:05:42.815 "bdev_nvme_disable_controller", 00:05:42.815 "bdev_nvme_enable_controller", 00:05:42.815 "bdev_nvme_reset_controller", 00:05:42.815 "bdev_nvme_get_transport_statistics", 00:05:42.815 "bdev_nvme_apply_firmware", 00:05:42.815 "bdev_nvme_detach_controller", 00:05:42.815 "bdev_nvme_get_controllers", 00:05:42.815 "bdev_nvme_attach_controller", 00:05:42.815 "bdev_nvme_set_hotplug", 00:05:42.815 "bdev_nvme_set_options", 00:05:42.815 "bdev_passthru_delete", 00:05:42.815 "bdev_passthru_create", 00:05:42.815 "bdev_lvol_set_parent_bdev", 00:05:42.815 "bdev_lvol_set_parent", 00:05:42.815 "bdev_lvol_check_shallow_copy", 00:05:42.815 "bdev_lvol_start_shallow_copy", 00:05:42.815 "bdev_lvol_grow_lvstore", 00:05:42.815 "bdev_lvol_get_lvols", 00:05:42.815 "bdev_lvol_get_lvstores", 00:05:42.815 "bdev_lvol_delete", 00:05:42.815 "bdev_lvol_set_read_only", 00:05:42.815 "bdev_lvol_resize", 00:05:42.815 "bdev_lvol_decouple_parent", 00:05:42.815 "bdev_lvol_inflate", 00:05:42.815 "bdev_lvol_rename", 00:05:42.815 "bdev_lvol_clone_bdev", 00:05:42.815 "bdev_lvol_clone", 00:05:42.815 "bdev_lvol_snapshot", 
00:05:42.815 "bdev_lvol_create", 00:05:42.815 "bdev_lvol_delete_lvstore", 00:05:42.815 "bdev_lvol_rename_lvstore", 00:05:42.815 "bdev_lvol_create_lvstore", 00:05:42.815 "bdev_raid_set_options", 00:05:42.815 "bdev_raid_remove_base_bdev", 00:05:42.815 "bdev_raid_add_base_bdev", 00:05:42.815 "bdev_raid_delete", 00:05:42.815 "bdev_raid_create", 00:05:42.815 "bdev_raid_get_bdevs", 00:05:42.815 "bdev_error_inject_error", 00:05:42.815 "bdev_error_delete", 00:05:42.815 "bdev_error_create", 00:05:42.815 "bdev_split_delete", 00:05:42.815 "bdev_split_create", 00:05:42.815 "bdev_delay_delete", 00:05:42.815 "bdev_delay_create", 00:05:42.815 "bdev_delay_update_latency", 00:05:42.815 "bdev_zone_block_delete", 00:05:42.815 "bdev_zone_block_create", 00:05:42.815 "blobfs_create", 00:05:42.815 "blobfs_detect", 00:05:42.815 "blobfs_set_cache_size", 00:05:42.815 "bdev_aio_delete", 00:05:42.815 "bdev_aio_rescan", 00:05:42.815 "bdev_aio_create", 00:05:42.815 "bdev_ftl_set_property", 00:05:42.815 "bdev_ftl_get_properties", 00:05:42.815 "bdev_ftl_get_stats", 00:05:42.815 "bdev_ftl_unmap", 00:05:42.815 "bdev_ftl_unload", 00:05:42.815 "bdev_ftl_delete", 00:05:42.815 "bdev_ftl_load", 00:05:42.815 "bdev_ftl_create", 00:05:42.815 "bdev_virtio_attach_controller", 00:05:42.815 "bdev_virtio_scsi_get_devices", 00:05:42.815 "bdev_virtio_detach_controller", 00:05:42.815 "bdev_virtio_blk_set_hotplug", 00:05:42.815 "bdev_iscsi_delete", 00:05:42.815 "bdev_iscsi_create", 00:05:42.816 "bdev_iscsi_set_options", 00:05:42.816 "bdev_uring_delete", 00:05:42.816 "bdev_uring_rescan", 00:05:42.816 "bdev_uring_create", 00:05:42.816 "accel_error_inject_error", 00:05:42.816 "ioat_scan_accel_module", 00:05:42.816 "dsa_scan_accel_module", 00:05:42.816 "iaa_scan_accel_module", 00:05:42.816 "keyring_file_remove_key", 00:05:42.816 "keyring_file_add_key", 00:05:42.816 "keyring_linux_set_options", 00:05:42.816 "fsdev_aio_delete", 00:05:42.816 "fsdev_aio_create", 00:05:42.816 "iscsi_get_histogram", 00:05:42.816 "iscsi_enable_histogram", 00:05:42.816 "iscsi_set_options", 00:05:42.816 "iscsi_get_auth_groups", 00:05:42.816 "iscsi_auth_group_remove_secret", 00:05:42.816 "iscsi_auth_group_add_secret", 00:05:42.816 "iscsi_delete_auth_group", 00:05:42.816 "iscsi_create_auth_group", 00:05:42.816 "iscsi_set_discovery_auth", 00:05:42.816 "iscsi_get_options", 00:05:42.816 "iscsi_target_node_request_logout", 00:05:42.816 "iscsi_target_node_set_redirect", 00:05:42.816 "iscsi_target_node_set_auth", 00:05:42.816 "iscsi_target_node_add_lun", 00:05:42.816 "iscsi_get_stats", 00:05:42.816 "iscsi_get_connections", 00:05:42.816 "iscsi_portal_group_set_auth", 00:05:42.816 "iscsi_start_portal_group", 00:05:42.816 "iscsi_delete_portal_group", 00:05:42.816 "iscsi_create_portal_group", 00:05:42.816 "iscsi_get_portal_groups", 00:05:42.816 "iscsi_delete_target_node", 00:05:42.816 "iscsi_target_node_remove_pg_ig_maps", 00:05:42.816 "iscsi_target_node_add_pg_ig_maps", 00:05:42.816 "iscsi_create_target_node", 00:05:42.816 "iscsi_get_target_nodes", 00:05:42.816 "iscsi_delete_initiator_group", 00:05:42.816 "iscsi_initiator_group_remove_initiators", 00:05:42.816 "iscsi_initiator_group_add_initiators", 00:05:42.816 "iscsi_create_initiator_group", 00:05:42.816 "iscsi_get_initiator_groups", 00:05:42.816 "nvmf_set_crdt", 00:05:42.816 "nvmf_set_config", 00:05:42.816 "nvmf_set_max_subsystems", 00:05:42.816 "nvmf_stop_mdns_prr", 00:05:42.816 "nvmf_publish_mdns_prr", 00:05:42.816 "nvmf_subsystem_get_listeners", 00:05:42.816 "nvmf_subsystem_get_qpairs", 00:05:42.816 
"nvmf_subsystem_get_controllers", 00:05:42.816 "nvmf_get_stats", 00:05:42.816 "nvmf_get_transports", 00:05:42.816 "nvmf_create_transport", 00:05:42.816 "nvmf_get_targets", 00:05:42.816 "nvmf_delete_target", 00:05:42.816 "nvmf_create_target", 00:05:42.816 "nvmf_subsystem_allow_any_host", 00:05:42.816 "nvmf_subsystem_set_keys", 00:05:42.816 "nvmf_subsystem_remove_host", 00:05:42.816 "nvmf_subsystem_add_host", 00:05:42.816 "nvmf_ns_remove_host", 00:05:42.816 "nvmf_ns_add_host", 00:05:42.816 "nvmf_subsystem_remove_ns", 00:05:42.816 "nvmf_subsystem_set_ns_ana_group", 00:05:42.816 "nvmf_subsystem_add_ns", 00:05:42.816 "nvmf_subsystem_listener_set_ana_state", 00:05:42.816 "nvmf_discovery_get_referrals", 00:05:42.816 "nvmf_discovery_remove_referral", 00:05:42.816 "nvmf_discovery_add_referral", 00:05:42.816 "nvmf_subsystem_remove_listener", 00:05:42.816 "nvmf_subsystem_add_listener", 00:05:42.816 "nvmf_delete_subsystem", 00:05:42.816 "nvmf_create_subsystem", 00:05:42.816 "nvmf_get_subsystems", 00:05:42.816 "env_dpdk_get_mem_stats", 00:05:42.816 "nbd_get_disks", 00:05:42.816 "nbd_stop_disk", 00:05:42.816 "nbd_start_disk", 00:05:42.816 "ublk_recover_disk", 00:05:42.816 "ublk_get_disks", 00:05:42.816 "ublk_stop_disk", 00:05:42.816 "ublk_start_disk", 00:05:42.816 "ublk_destroy_target", 00:05:42.816 "ublk_create_target", 00:05:42.816 "virtio_blk_create_transport", 00:05:42.816 "virtio_blk_get_transports", 00:05:42.816 "vhost_controller_set_coalescing", 00:05:42.816 "vhost_get_controllers", 00:05:42.816 "vhost_delete_controller", 00:05:42.816 "vhost_create_blk_controller", 00:05:42.816 "vhost_scsi_controller_remove_target", 00:05:42.816 "vhost_scsi_controller_add_target", 00:05:42.816 "vhost_start_scsi_controller", 00:05:42.816 "vhost_create_scsi_controller", 00:05:42.816 "thread_set_cpumask", 00:05:42.816 "scheduler_set_options", 00:05:42.816 "framework_get_governor", 00:05:42.816 "framework_get_scheduler", 00:05:42.816 "framework_set_scheduler", 00:05:42.816 "framework_get_reactors", 00:05:42.816 "thread_get_io_channels", 00:05:42.816 "thread_get_pollers", 00:05:42.816 "thread_get_stats", 00:05:42.816 "framework_monitor_context_switch", 00:05:42.816 "spdk_kill_instance", 00:05:42.816 "log_enable_timestamps", 00:05:42.816 "log_get_flags", 00:05:42.816 "log_clear_flag", 00:05:42.816 "log_set_flag", 00:05:42.816 "log_get_level", 00:05:42.816 "log_set_level", 00:05:42.816 "log_get_print_level", 00:05:42.816 "log_set_print_level", 00:05:42.816 "framework_enable_cpumask_locks", 00:05:42.816 "framework_disable_cpumask_locks", 00:05:42.816 "framework_wait_init", 00:05:42.816 "framework_start_init", 00:05:42.816 "scsi_get_devices", 00:05:42.816 "bdev_get_histogram", 00:05:42.816 "bdev_enable_histogram", 00:05:42.816 "bdev_set_qos_limit", 00:05:42.816 "bdev_set_qd_sampling_period", 00:05:42.816 "bdev_get_bdevs", 00:05:42.816 "bdev_reset_iostat", 00:05:42.816 "bdev_get_iostat", 00:05:42.816 "bdev_examine", 00:05:42.816 "bdev_wait_for_examine", 00:05:42.816 "bdev_set_options", 00:05:42.816 "accel_get_stats", 00:05:42.816 "accel_set_options", 00:05:42.816 "accel_set_driver", 00:05:42.816 "accel_crypto_key_destroy", 00:05:42.816 "accel_crypto_keys_get", 00:05:42.816 "accel_crypto_key_create", 00:05:42.816 "accel_assign_opc", 00:05:42.816 "accel_get_module_info", 00:05:42.816 "accel_get_opc_assignments", 00:05:42.816 "vmd_rescan", 00:05:42.816 "vmd_remove_device", 00:05:42.816 "vmd_enable", 00:05:42.816 "sock_get_default_impl", 00:05:42.816 "sock_set_default_impl", 00:05:42.816 "sock_impl_set_options", 00:05:42.816 
"sock_impl_get_options", 00:05:42.816 "iobuf_get_stats", 00:05:42.816 "iobuf_set_options", 00:05:42.816 "keyring_get_keys", 00:05:42.816 "framework_get_pci_devices", 00:05:42.816 "framework_get_config", 00:05:42.816 "framework_get_subsystems", 00:05:42.816 "fsdev_set_opts", 00:05:42.816 "fsdev_get_opts", 00:05:42.816 "trace_get_info", 00:05:42.816 "trace_get_tpoint_group_mask", 00:05:42.816 "trace_disable_tpoint_group", 00:05:42.816 "trace_enable_tpoint_group", 00:05:42.816 "trace_clear_tpoint_mask", 00:05:42.816 "trace_set_tpoint_mask", 00:05:42.816 "notify_get_notifications", 00:05:42.816 "notify_get_types", 00:05:42.816 "spdk_get_version", 00:05:42.816 "rpc_get_methods" 00:05:42.816 ] 00:05:42.816 13:24:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:42.816 13:24:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.816 13:24:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.816 13:24:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:42.816 13:24:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57929 00:05:42.816 13:24:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57929 ']' 00:05:42.816 13:24:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57929 00:05:42.816 13:24:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:42.816 13:24:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.816 13:24:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57929 00:05:42.816 13:24:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.816 13:24:54 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.816 killing process with pid 57929 00:05:42.816 13:24:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57929' 00:05:42.816 13:24:54 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57929 00:05:42.817 13:24:54 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57929 00:05:43.075 00:05:43.075 real 0m1.512s 00:05:43.075 user 0m2.570s 00:05:43.075 sys 0m0.475s 00:05:43.075 13:24:55 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.075 13:24:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.075 ************************************ 00:05:43.075 END TEST spdkcli_tcp 00:05:43.075 ************************************ 00:05:43.334 13:24:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.334 13:24:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.334 13:24:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.334 13:24:55 -- common/autotest_common.sh@10 -- # set +x 00:05:43.334 ************************************ 00:05:43.334 START TEST dpdk_mem_utility 00:05:43.334 ************************************ 00:05:43.334 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.334 * Looking for test storage... 
00:05:43.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:43.334 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.334 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.334 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.334 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.334 13:24:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.335 13:24:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.335 13:24:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:43.335 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.335 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.335 --rc genhtml_branch_coverage=1 00:05:43.335 --rc genhtml_function_coverage=1 00:05:43.335 --rc genhtml_legend=1 00:05:43.335 --rc geninfo_all_blocks=1 00:05:43.335 --rc geninfo_unexecuted_blocks=1 00:05:43.335 00:05:43.335 ' 00:05:43.335 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.335 --rc 
genhtml_branch_coverage=1 00:05:43.335 --rc genhtml_function_coverage=1 00:05:43.335 --rc genhtml_legend=1 00:05:43.335 --rc geninfo_all_blocks=1 00:05:43.335 --rc geninfo_unexecuted_blocks=1 00:05:43.335 00:05:43.335 ' 00:05:43.335 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.335 --rc genhtml_branch_coverage=1 00:05:43.335 --rc genhtml_function_coverage=1 00:05:43.335 --rc genhtml_legend=1 00:05:43.335 --rc geninfo_all_blocks=1 00:05:43.335 --rc geninfo_unexecuted_blocks=1 00:05:43.335 00:05:43.335 ' 00:05:43.335 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.335 --rc genhtml_branch_coverage=1 00:05:43.335 --rc genhtml_function_coverage=1 00:05:43.335 --rc genhtml_legend=1 00:05:43.335 --rc geninfo_all_blocks=1 00:05:43.335 --rc geninfo_unexecuted_blocks=1 00:05:43.335 00:05:43.335 ' 00:05:43.335 13:24:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:43.335 13:24:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58026 00:05:43.335 13:24:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58026 00:05:43.335 13:24:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.335 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58026 ']' 00:05:43.335 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.335 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.335 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.335 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.335 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.593 [2024-11-20 13:24:55.326173] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:05:43.593 [2024-11-20 13:24:55.326288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58026 ] 00:05:43.593 [2024-11-20 13:24:55.471631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.593 [2024-11-20 13:24:55.539183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.852 [2024-11-20 13:24:55.613713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.112 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.112 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:44.112 13:24:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:44.113 13:24:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:44.113 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.113 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.113 { 00:05:44.113 "filename": "/tmp/spdk_mem_dump.txt" 00:05:44.113 } 00:05:44.113 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.113 13:24:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:44.113 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:44.113 1 heaps totaling size 818.000000 MiB 00:05:44.113 size: 818.000000 MiB heap id: 0 00:05:44.113 end heaps---------- 00:05:44.113 9 mempools totaling size 603.782043 MiB 00:05:44.113 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:44.113 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:44.113 size: 100.555481 MiB name: bdev_io_58026 00:05:44.113 size: 50.003479 MiB name: msgpool_58026 00:05:44.113 size: 36.509338 MiB name: fsdev_io_58026 00:05:44.113 size: 21.763794 MiB name: PDU_Pool 00:05:44.113 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:44.113 size: 4.133484 MiB name: evtpool_58026 00:05:44.113 size: 0.026123 MiB name: Session_Pool 00:05:44.113 end mempools------- 00:05:44.113 6 memzones totaling size 4.142822 MiB 00:05:44.113 size: 1.000366 MiB name: RG_ring_0_58026 00:05:44.113 size: 1.000366 MiB name: RG_ring_1_58026 00:05:44.113 size: 1.000366 MiB name: RG_ring_4_58026 00:05:44.113 size: 1.000366 MiB name: RG_ring_5_58026 00:05:44.113 size: 0.125366 MiB name: RG_ring_2_58026 00:05:44.113 size: 0.015991 MiB name: RG_ring_3_58026 00:05:44.113 end memzones------- 00:05:44.113 13:24:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:44.113 heap id: 0 total size: 818.000000 MiB number of busy elements: 318 number of free elements: 15 00:05:44.113 list of free elements. 
size: 10.802307 MiB 00:05:44.113 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:44.113 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:44.113 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:44.113 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:44.113 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:44.113 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:44.113 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:44.113 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:44.113 element at address: 0x20001ae00000 with size: 0.567505 MiB 00:05:44.113 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:44.113 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:44.113 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:44.113 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:44.113 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:44.113 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:44.113 list of standard malloc elements. size: 199.268799 MiB 00:05:44.113 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:44.113 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:44.113 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:44.113 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:44.113 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:44.113 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:44.113 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:44.113 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:44.113 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:44.113 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:44.113 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:44.113 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:44.113 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:44.114 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:44.114 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:05:44.114 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:44.114 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:44.114 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:44.114 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91480 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 
00:05:44.114 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:44.114 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:44.115 element at 
address: 0x20001ae95140 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:44.115 element at address: 0x200028265500 with size: 0.000183 MiB 00:05:44.115 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826c480 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826e280 
with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826e340 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:44.115 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:44.115 list of memzone associated elements. 
size: 607.928894 MiB 00:05:44.116 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:44.116 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:44.116 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:44.116 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:44.116 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:44.116 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58026_0 00:05:44.116 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:44.116 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58026_0 00:05:44.116 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:44.116 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58026_0 00:05:44.116 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:44.116 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:44.116 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:44.116 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:44.116 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:44.116 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58026_0 00:05:44.116 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:44.116 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58026 00:05:44.116 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:44.116 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58026 00:05:44.116 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:44.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:44.116 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:44.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:44.116 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:44.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:44.116 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:44.116 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:44.116 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:44.116 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58026 00:05:44.116 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:44.116 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58026 00:05:44.116 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:44.116 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58026 00:05:44.116 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:44.116 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58026 00:05:44.116 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:44.116 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58026 00:05:44.116 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:44.116 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58026 00:05:44.116 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:44.116 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:44.116 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:44.116 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:44.116 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:44.116 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:44.116 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:44.116 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58026 00:05:44.116 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:44.116 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58026 00:05:44.116 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:44.116 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:44.116 element at address: 0x200028265680 with size: 0.023743 MiB 00:05:44.116 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:44.116 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:44.116 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58026 00:05:44.116 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:05:44.116 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:44.116 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:44.116 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58026 00:05:44.116 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:44.116 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58026 00:05:44.116 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:44.116 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58026 00:05:44.116 element at address: 0x20002826c280 with size: 0.000305 MiB 00:05:44.116 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:44.116 13:24:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:44.116 13:24:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58026 00:05:44.116 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58026 ']' 00:05:44.116 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58026 00:05:44.116 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:44.116 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.116 13:24:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58026 00:05:44.116 13:24:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.116 13:24:56 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.116 killing process with pid 58026 00:05:44.116 13:24:56 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58026' 00:05:44.116 13:24:56 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58026 00:05:44.116 13:24:56 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58026 00:05:44.684 00:05:44.684 real 0m1.325s 00:05:44.684 user 0m1.291s 00:05:44.684 sys 0m0.431s 00:05:44.684 13:24:56 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.684 13:24:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.684 ************************************ 00:05:44.684 END TEST dpdk_mem_utility 00:05:44.684 ************************************ 00:05:44.684 13:24:56 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:44.684 13:24:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.684 13:24:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.684 13:24:56 -- common/autotest_common.sh@10 -- # set +x 
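The heap, mempool, and memzone summary above is what scripts/dpdk_mem_info.py prints from the dump requested over RPC. A minimal sketch of that flow follows, assuming a running spdk_tgt on the default RPC socket and that the script picks up /tmp/spdk_mem_dump.txt (the filename returned above) by default, as the test relies on.

#!/usr/bin/env bash
# Sketch of the dpdk_mem_utility flow recorded above.
spdk_dir=/home/vagrant/spdk_repo/spdk

# Ask the target to write out its DPDK memory statistics; the reply names the
# dump file (this run got /tmp/spdk_mem_dump.txt).
"$spdk_dir/scripts/rpc.py" env_dpdk_get_mem_stats

# Summarize heaps, mempools and memzones from the dump ...
"$spdk_dir/scripts/dpdk_mem_info.py"

# ... and print the per-element breakdown for heap 0, as the test does.
"$spdk_dir/scripts/dpdk_mem_info.py" -m 0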
00:05:44.684 ************************************ 00:05:44.684 START TEST event 00:05:44.684 ************************************ 00:05:44.684 13:24:56 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:44.684 * Looking for test storage... 00:05:44.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:44.684 13:24:56 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.684 13:24:56 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.684 13:24:56 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.684 13:24:56 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.684 13:24:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.684 13:24:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.684 13:24:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.684 13:24:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.684 13:24:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.684 13:24:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.684 13:24:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.684 13:24:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.684 13:24:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.684 13:24:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.684 13:24:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.684 13:24:56 event -- scripts/common.sh@344 -- # case "$op" in 00:05:44.684 13:24:56 event -- scripts/common.sh@345 -- # : 1 00:05:44.684 13:24:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.684 13:24:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.684 13:24:56 event -- scripts/common.sh@365 -- # decimal 1 00:05:44.684 13:24:56 event -- scripts/common.sh@353 -- # local d=1 00:05:44.684 13:24:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.684 13:24:56 event -- scripts/common.sh@355 -- # echo 1 00:05:44.684 13:24:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.684 13:24:56 event -- scripts/common.sh@366 -- # decimal 2 00:05:44.684 13:24:56 event -- scripts/common.sh@353 -- # local d=2 00:05:44.684 13:24:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.684 13:24:56 event -- scripts/common.sh@355 -- # echo 2 00:05:44.684 13:24:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.684 13:24:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.685 13:24:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.685 13:24:56 event -- scripts/common.sh@368 -- # return 0 00:05:44.685 13:24:56 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.685 13:24:56 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.685 --rc genhtml_branch_coverage=1 00:05:44.685 --rc genhtml_function_coverage=1 00:05:44.685 --rc genhtml_legend=1 00:05:44.685 --rc geninfo_all_blocks=1 00:05:44.685 --rc geninfo_unexecuted_blocks=1 00:05:44.685 00:05:44.685 ' 00:05:44.685 13:24:56 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.685 --rc genhtml_branch_coverage=1 00:05:44.685 --rc genhtml_function_coverage=1 00:05:44.685 --rc genhtml_legend=1 00:05:44.685 --rc 
geninfo_all_blocks=1 00:05:44.685 --rc geninfo_unexecuted_blocks=1 00:05:44.685 00:05:44.685 ' 00:05:44.685 13:24:56 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.685 --rc genhtml_branch_coverage=1 00:05:44.685 --rc genhtml_function_coverage=1 00:05:44.685 --rc genhtml_legend=1 00:05:44.685 --rc geninfo_all_blocks=1 00:05:44.685 --rc geninfo_unexecuted_blocks=1 00:05:44.685 00:05:44.685 ' 00:05:44.685 13:24:56 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.685 --rc genhtml_branch_coverage=1 00:05:44.685 --rc genhtml_function_coverage=1 00:05:44.685 --rc genhtml_legend=1 00:05:44.685 --rc geninfo_all_blocks=1 00:05:44.685 --rc geninfo_unexecuted_blocks=1 00:05:44.685 00:05:44.685 ' 00:05:44.685 13:24:56 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:44.685 13:24:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:44.685 13:24:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.685 13:24:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:44.685 13:24:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.943 13:24:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.943 ************************************ 00:05:44.943 START TEST event_perf 00:05:44.943 ************************************ 00:05:44.943 13:24:56 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.943 Running I/O for 1 seconds...[2024-11-20 13:24:56.667035] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:05:44.943 [2024-11-20 13:24:56.667137] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58098 ] 00:05:44.943 [2024-11-20 13:24:56.811781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.943 [2024-11-20 13:24:56.881028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.943 [2024-11-20 13:24:56.881164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.943 Running I/O for 1 seconds...[2024-11-20 13:24:56.882353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.943 [2024-11-20 13:24:56.882367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.318 00:05:46.318 lcore 0: 192058 00:05:46.318 lcore 1: 192058 00:05:46.318 lcore 2: 192059 00:05:46.318 lcore 3: 192058 00:05:46.318 done. 
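The per-lcore counters above come straight from the event_perf binary run on four cores for one second. A minimal sketch of the same invocation follows; the awk total is illustrative post-processing, not part of the test script.

#!/usr/bin/env bash
# Sketch of the event_perf run whose lcore counters are printed above:
# four cores (-m 0xF) for one second (-t 1).
spdk_dir=/home/vagrant/spdk_repo/spdk

"$spdk_dir/test/event/event_perf/event_perf" -m 0xF -t 1 | tee /tmp/event_perf.out

# Each "lcore N: <count>" line is that reactor's event count for the run.
awk '/^lcore/ { total += $3 } END { print "total events:", total }' /tmp/event_perf.out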
00:05:46.318 00:05:46.318 real 0m1.293s 00:05:46.318 user 0m4.102s 00:05:46.318 sys 0m0.051s 00:05:46.318 13:24:57 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.318 13:24:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.318 ************************************ 00:05:46.318 END TEST event_perf 00:05:46.318 ************************************ 00:05:46.318 13:24:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:46.318 13:24:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:46.318 13:24:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.318 13:24:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.318 ************************************ 00:05:46.318 START TEST event_reactor 00:05:46.318 ************************************ 00:05:46.318 13:24:57 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:46.318 [2024-11-20 13:24:58.009173] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:05:46.318 [2024-11-20 13:24:58.009289] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58136 ] 00:05:46.318 [2024-11-20 13:24:58.149289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.318 [2024-11-20 13:24:58.216162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.731 test_start 00:05:47.731 oneshot 00:05:47.731 tick 100 00:05:47.731 tick 100 00:05:47.731 tick 250 00:05:47.731 tick 100 00:05:47.731 tick 100 00:05:47.731 tick 250 00:05:47.731 tick 500 00:05:47.731 tick 100 00:05:47.731 tick 100 00:05:47.731 tick 100 00:05:47.731 tick 250 00:05:47.731 tick 100 00:05:47.731 tick 100 00:05:47.731 test_end 00:05:47.731 00:05:47.731 real 0m1.282s 00:05:47.731 user 0m1.135s 00:05:47.731 sys 0m0.041s 00:05:47.731 13:24:59 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.731 ************************************ 00:05:47.731 END TEST event_reactor 00:05:47.731 ************************************ 00:05:47.731 13:24:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:47.731 13:24:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.731 13:24:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:47.731 13:24:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.731 13:24:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.731 ************************************ 00:05:47.731 START TEST event_reactor_perf 00:05:47.731 ************************************ 00:05:47.731 13:24:59 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.731 [2024-11-20 13:24:59.338748] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:05:47.731 [2024-11-20 13:24:59.338833] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58172 ] 00:05:47.731 [2024-11-20 13:24:59.479686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.731 [2024-11-20 13:24:59.544897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.666 test_start 00:05:48.666 test_end 00:05:48.666 Performance: 378514 events per second 00:05:48.666 00:05:48.666 real 0m1.278s 00:05:48.666 user 0m1.129s 00:05:48.666 sys 0m0.042s 00:05:48.666 13:25:00 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.666 ************************************ 00:05:48.666 END TEST event_reactor_perf 00:05:48.666 13:25:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.666 ************************************ 00:05:48.924 13:25:00 event -- event/event.sh@49 -- # uname -s 00:05:48.924 13:25:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:48.924 13:25:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:48.924 13:25:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.924 13:25:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.924 13:25:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.924 ************************************ 00:05:48.924 START TEST event_scheduler 00:05:48.924 ************************************ 00:05:48.924 13:25:00 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:48.924 * Looking for test storage... 
00:05:48.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:48.924 13:25:00 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:48.924 13:25:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:48.924 13:25:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.924 13:25:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:48.924 13:25:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.925 13:25:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:48.925 13:25:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.925 13:25:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.925 --rc genhtml_branch_coverage=1 00:05:48.925 --rc genhtml_function_coverage=1 00:05:48.925 --rc genhtml_legend=1 00:05:48.925 --rc geninfo_all_blocks=1 00:05:48.925 --rc geninfo_unexecuted_blocks=1 00:05:48.925 00:05:48.925 ' 00:05:48.925 13:25:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.925 --rc genhtml_branch_coverage=1 00:05:48.925 --rc genhtml_function_coverage=1 00:05:48.925 --rc genhtml_legend=1 00:05:48.925 --rc geninfo_all_blocks=1 00:05:48.925 --rc geninfo_unexecuted_blocks=1 00:05:48.925 00:05:48.925 ' 00:05:48.925 13:25:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.925 --rc genhtml_branch_coverage=1 00:05:48.925 --rc genhtml_function_coverage=1 00:05:48.925 --rc genhtml_legend=1 00:05:48.925 --rc geninfo_all_blocks=1 00:05:48.925 --rc geninfo_unexecuted_blocks=1 00:05:48.925 00:05:48.925 ' 00:05:48.925 13:25:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.925 --rc genhtml_branch_coverage=1 00:05:48.925 --rc genhtml_function_coverage=1 00:05:48.925 --rc genhtml_legend=1 00:05:48.925 --rc geninfo_all_blocks=1 00:05:48.925 --rc geninfo_unexecuted_blocks=1 00:05:48.925 00:05:48.925 ' 00:05:48.925 13:25:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:48.925 13:25:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58236 00:05:48.925 13:25:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:48.925 13:25:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.925 13:25:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58236 00:05:48.925 13:25:00 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58236 ']' 00:05:48.925 13:25:00 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.925 13:25:00 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.925 13:25:00 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.925 13:25:00 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.925 13:25:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.183 [2024-11-20 13:25:00.906052] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:05:49.183 [2024-11-20 13:25:00.906431] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58236 ] 00:05:49.183 [2024-11-20 13:25:01.051830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.183 [2024-11-20 13:25:01.127792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.183 [2024-11-20 13:25:01.127930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.183 [2024-11-20 13:25:01.128052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.183 [2024-11-20 13:25:01.128053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.442 13:25:01 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.442 13:25:01 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:49.442 13:25:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:49.442 13:25:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.442 13:25:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.442 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:49.442 POWER: Cannot set governor of lcore 0 to userspace 00:05:49.442 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:49.442 POWER: Cannot set governor of lcore 0 to performance 00:05:49.442 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:49.442 POWER: Cannot set governor of lcore 0 to userspace 00:05:49.442 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:49.442 POWER: Cannot set governor of lcore 0 to userspace 00:05:49.442 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:49.442 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:49.442 POWER: Unable to set Power Management Environment for lcore 0 00:05:49.442 [2024-11-20 13:25:01.179025] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:49.442 [2024-11-20 13:25:01.179158] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:49.442 [2024-11-20 13:25:01.179314] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:49.442 [2024-11-20 13:25:01.179473] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:49.442 [2024-11-20 13:25:01.179604] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:49.442 [2024-11-20 13:25:01.179745] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:49.442 13:25:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.442 13:25:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:49.442 13:25:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.442 13:25:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.442 [2024-11-20 13:25:01.241749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.442 [2024-11-20 13:25:01.281946] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:49.442 13:25:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.442 13:25:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:49.442 13:25:01 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.442 13:25:01 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.442 13:25:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.442 ************************************ 00:05:49.442 START TEST scheduler_create_thread 00:05:49.442 ************************************ 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.442 2 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.442 3 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.442 4 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.442 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.442 5 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 6 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 7 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 8 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 9 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 10 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.443 13:25:01 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.443 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.701 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.701 13:25:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:49.702 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.702 13:25:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.078 13:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.078 13:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:51.078 13:25:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:51.078 13:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.078 13:25:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.011 13:25:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.011 00:05:52.011 real 0m2.615s 00:05:52.011 user 0m0.010s 00:05:52.011 sys 0m0.007s 00:05:52.011 ************************************ 00:05:52.011 END TEST scheduler_create_thread 00:05:52.011 ************************************ 00:05:52.011 13:25:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.011 13:25:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.011 13:25:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:52.011 13:25:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58236 00:05:52.011 13:25:03 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58236 ']' 00:05:52.011 13:25:03 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58236 00:05:52.011 13:25:03 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:52.012 13:25:03 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.012 13:25:03 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58236 00:05:52.269 killing process with pid 58236 00:05:52.269 13:25:03 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:52.269 13:25:03 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:52.269 13:25:03 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58236' 00:05:52.269 13:25:03 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58236 00:05:52.269 13:25:03 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58236 00:05:52.528 [2024-11-20 13:25:04.390623] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:52.787 00:05:52.787 real 0m3.964s 00:05:52.787 user 0m5.795s 00:05:52.787 sys 0m0.345s 00:05:52.787 13:25:04 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.787 13:25:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.787 ************************************ 00:05:52.787 END TEST event_scheduler 00:05:52.787 ************************************ 00:05:52.787 13:25:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:52.787 13:25:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:52.787 13:25:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.787 13:25:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.787 13:25:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.787 ************************************ 00:05:52.787 START TEST app_repeat 00:05:52.787 ************************************ 00:05:52.787 13:25:04 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:52.787 Process app_repeat pid: 58328 00:05:52.787 spdk_app_start Round 0 00:05:52.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58328 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58328' 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:52.787 13:25:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58328 /var/tmp/spdk-nbd.sock 00:05:52.787 13:25:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58328 ']' 00:05:52.787 13:25:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.787 13:25:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.787 13:25:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
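For reference, the scheduler exercise traced above can be reproduced by hand against a running scheduler test app. This is only a minimal sketch based on the RPCs visible in the trace (framework_set_scheduler, framework_start_init, and the scheduler_plugin thread calls); the rpc.py path is taken from the traced commands, and it assumes the scheduler test binary is already running with --wait-for-rpc and that scheduler_plugin is on rpc.py's plugin search path:

    # Sketch only: mirrors the RPC sequence seen in scheduler.sh above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    PLUGIN=scheduler_plugin

    $RPC framework_set_scheduler dynamic      # pick the dynamic scheduler
    $RPC framework_start_init                 # finish subsystem initialization

    # Create an active thread pinned to core 0 (mask 0x1, 100% active),
    # change its active percentage, then delete it again.
    tid=$($RPC --plugin $PLUGIN scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    $RPC --plugin $PLUGIN scheduler_thread_set_active "$tid" 50
    $RPC --plugin $PLUGIN scheduler_thread_delete "$tid"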
00:05:52.787 13:25:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.787 13:25:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.787 [2024-11-20 13:25:04.715701] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:05:52.787 [2024-11-20 13:25:04.716019] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58328 ] 00:05:53.045 [2024-11-20 13:25:04.865602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.045 [2024-11-20 13:25:04.933317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.045 [2024-11-20 13:25:04.933327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.045 [2024-11-20 13:25:04.992287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.303 13:25:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.303 13:25:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.303 13:25:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.562 Malloc0 00:05:53.562 13:25:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.821 Malloc1 00:05:53.821 13:25:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.821 13:25:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.080 /dev/nbd0 00:05:54.080 13:25:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.080 13:25:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.080 13:25:06 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:05:54.080 13:25:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.080 13:25:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.080 13:25:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.080 13:25:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:54.080 13:25:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.080 13:25:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.080 13:25:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.080 13:25:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.080 1+0 records in 00:05:54.080 1+0 records out 00:05:54.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025141 s, 16.3 MB/s 00:05:54.080 13:25:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.080 13:25:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.080 13:25:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.338 13:25:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.338 13:25:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.338 13:25:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.338 13:25:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.338 13:25:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.597 /dev/nbd1 00:05:54.597 13:25:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.597 13:25:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.597 13:25:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.597 13:25:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.597 13:25:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.597 13:25:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.597 13:25:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.597 13:25:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.597 13:25:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.597 13:25:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.597 13:25:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.597 1+0 records in 00:05:54.597 1+0 records out 00:05:54.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417348 s, 9.8 MB/s 00:05:54.597 13:25:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.597 13:25:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.597 13:25:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.597 13:25:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.597 13:25:06 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:05:54.597 13:25:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.597 13:25:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.597 13:25:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.597 13:25:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.597 13:25:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.856 { 00:05:54.856 "nbd_device": "/dev/nbd0", 00:05:54.856 "bdev_name": "Malloc0" 00:05:54.856 }, 00:05:54.856 { 00:05:54.856 "nbd_device": "/dev/nbd1", 00:05:54.856 "bdev_name": "Malloc1" 00:05:54.856 } 00:05:54.856 ]' 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.856 { 00:05:54.856 "nbd_device": "/dev/nbd0", 00:05:54.856 "bdev_name": "Malloc0" 00:05:54.856 }, 00:05:54.856 { 00:05:54.856 "nbd_device": "/dev/nbd1", 00:05:54.856 "bdev_name": "Malloc1" 00:05:54.856 } 00:05:54.856 ]' 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.856 /dev/nbd1' 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.856 /dev/nbd1' 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.856 256+0 records in 00:05:54.856 256+0 records out 00:05:54.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00694805 s, 151 MB/s 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.856 256+0 records in 00:05:54.856 256+0 records out 00:05:54.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248969 s, 42.1 MB/s 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.856 256+0 records in 00:05:54.856 
256+0 records out 00:05:54.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225466 s, 46.5 MB/s 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.856 13:25:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.115 13:25:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.115 13:25:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.115 13:25:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.115 13:25:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.115 13:25:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.115 13:25:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.115 13:25:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.115 13:25:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.115 13:25:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.115 13:25:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.373 13:25:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.373 13:25:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.373 13:25:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.373 13:25:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.373 13:25:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:55.373 13:25:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.373 13:25:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.373 13:25:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.373 13:25:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.373 13:25:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.373 13:25:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.631 13:25:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.631 13:25:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.631 13:25:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.890 13:25:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.890 13:25:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.890 13:25:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.890 13:25:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.890 13:25:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.890 13:25:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.890 13:25:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.890 13:25:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.890 13:25:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.890 13:25:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.148 13:25:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.148 [2024-11-20 13:25:08.098396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.407 [2024-11-20 13:25:08.145933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.407 [2024-11-20 13:25:08.145937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.407 [2024-11-20 13:25:08.204131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.407 [2024-11-20 13:25:08.204275] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.407 [2024-11-20 13:25:08.204290] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.697 13:25:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.697 spdk_app_start Round 1 00:05:59.697 13:25:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:59.697 13:25:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58328 /var/tmp/spdk-nbd.sock 00:05:59.697 13:25:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58328 ']' 00:05:59.697 13:25:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.697 13:25:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.697 13:25:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
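The write/verify pass that repeats in each app_repeat round above is just dd plus cmp against the two exported nbd devices. A condensed sketch of that loop, with the file name, block size, and count copied from the trace, and /dev/nbd0 and /dev/nbd1 assumed to already be connected by the app:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # Seed 1 MiB of random data, mirror it onto both nbd devices with O_DIRECT,
    # then read each device back and compare it against the source file.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"        # non-zero exit means a data mismatch
    done
    rm "$tmp"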
00:05:59.697 13:25:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.697 13:25:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.697 13:25:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.697 13:25:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.697 13:25:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.697 Malloc0 00:05:59.697 13:25:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.957 Malloc1 00:05:59.957 13:25:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.957 13:25:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.216 /dev/nbd0 00:06:00.216 13:25:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.216 13:25:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.216 1+0 records in 00:06:00.216 1+0 records out 
00:06:00.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412899 s, 9.9 MB/s 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.216 13:25:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.216 13:25:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.216 13:25:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.216 13:25:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.475 /dev/nbd1 00:06:00.475 13:25:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.475 13:25:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.475 1+0 records in 00:06:00.475 1+0 records out 00:06:00.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232745 s, 17.6 MB/s 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.475 13:25:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.475 13:25:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.475 13:25:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.475 13:25:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.475 13:25:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.475 13:25:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.761 13:25:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.761 { 00:06:00.761 "nbd_device": "/dev/nbd0", 00:06:00.761 "bdev_name": "Malloc0" 00:06:00.761 }, 00:06:00.761 { 00:06:00.761 "nbd_device": "/dev/nbd1", 00:06:00.761 "bdev_name": "Malloc1" 00:06:00.761 } 
00:06:00.761 ]' 00:06:00.761 13:25:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.761 { 00:06:00.761 "nbd_device": "/dev/nbd0", 00:06:00.761 "bdev_name": "Malloc0" 00:06:00.761 }, 00:06:00.761 { 00:06:00.761 "nbd_device": "/dev/nbd1", 00:06:00.761 "bdev_name": "Malloc1" 00:06:00.761 } 00:06:00.761 ]' 00:06:00.761 13:25:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.761 13:25:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.761 /dev/nbd1' 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.020 /dev/nbd1' 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.020 256+0 records in 00:06:01.020 256+0 records out 00:06:01.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107114 s, 97.9 MB/s 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.020 256+0 records in 00:06:01.020 256+0 records out 00:06:01.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234281 s, 44.8 MB/s 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.020 256+0 records in 00:06:01.020 256+0 records out 00:06:01.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024628 s, 42.6 MB/s 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.020 13:25:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.280 13:25:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.280 13:25:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.280 13:25:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.280 13:25:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.280 13:25:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.280 13:25:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.280 13:25:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.280 13:25:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.280 13:25:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.280 13:25:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.539 13:25:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.539 13:25:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.539 13:25:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.539 13:25:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.539 13:25:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.539 13:25:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.539 13:25:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.539 13:25:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.539 13:25:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.539 13:25:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.539 13:25:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.798 13:25:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.798 13:25:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.798 13:25:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:01.798 13:25:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.798 13:25:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.798 13:25:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.058 13:25:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.058 13:25:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.058 13:25:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.058 13:25:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.058 13:25:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.058 13:25:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.058 13:25:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.317 13:25:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.317 [2024-11-20 13:25:14.257581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.575 [2024-11-20 13:25:14.316337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.575 [2024-11-20 13:25:14.316351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.576 [2024-11-20 13:25:14.375332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.576 [2024-11-20 13:25:14.375446] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.576 [2024-11-20 13:25:14.375476] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.866 13:25:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.866 spdk_app_start Round 2 00:06:05.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.866 13:25:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:05.866 13:25:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58328 /var/tmp/spdk-nbd.sock 00:06:05.866 13:25:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58328 ']' 00:06:05.866 13:25:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.866 13:25:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.866 13:25:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
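Behind the Malloc0/Malloc1 setup and teardown in each round sit a handful of RPCs issued over /var/tmp/spdk-nbd.sock. A rough sketch of that lifecycle, using only the calls that appear in the trace (sizes and names as traced; the app_repeat binary is assumed to be listening on the socket):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC bdev_malloc_create 64 4096                  # 64 MiB malloc bdev with 4 KiB blocks
    $RPC nbd_start_disk Malloc0 /dev/nbd0            # export the bdev as /dev/nbd0
    $RPC nbd_get_disks | jq -r '.[] | .nbd_device'   # list currently exported nbd devices
    $RPC nbd_stop_disk /dev/nbd0                     # tear the export down again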
00:06:05.866 13:25:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.866 13:25:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.866 13:25:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.866 13:25:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:05.866 13:25:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.866 Malloc0 00:06:05.866 13:25:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.125 Malloc1 00:06:06.125 13:25:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.125 13:25:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.384 /dev/nbd0 00:06:06.384 13:25:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.384 13:25:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.384 1+0 records in 00:06:06.384 1+0 records out 
00:06:06.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297473 s, 13.8 MB/s 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.384 13:25:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.384 13:25:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.384 13:25:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.384 13:25:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.953 /dev/nbd1 00:06:06.953 13:25:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.953 13:25:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.953 1+0 records in 00:06:06.953 1+0 records out 00:06:06.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264568 s, 15.5 MB/s 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.953 13:25:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.953 13:25:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.953 13:25:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.953 13:25:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.953 13:25:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.953 13:25:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.212 13:25:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.212 { 00:06:07.212 "nbd_device": "/dev/nbd0", 00:06:07.212 "bdev_name": "Malloc0" 00:06:07.212 }, 00:06:07.212 { 00:06:07.212 "nbd_device": "/dev/nbd1", 00:06:07.212 "bdev_name": "Malloc1" 00:06:07.212 } 
00:06:07.212 ]' 00:06:07.212 13:25:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.212 13:25:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.212 { 00:06:07.212 "nbd_device": "/dev/nbd0", 00:06:07.212 "bdev_name": "Malloc0" 00:06:07.212 }, 00:06:07.212 { 00:06:07.212 "nbd_device": "/dev/nbd1", 00:06:07.212 "bdev_name": "Malloc1" 00:06:07.212 } 00:06:07.212 ]' 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.212 /dev/nbd1' 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.212 /dev/nbd1' 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.212 256+0 records in 00:06:07.212 256+0 records out 00:06:07.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108132 s, 97.0 MB/s 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.212 256+0 records in 00:06:07.212 256+0 records out 00:06:07.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215722 s, 48.6 MB/s 00:06:07.212 13:25:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.213 256+0 records in 00:06:07.213 256+0 records out 00:06:07.213 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242354 s, 43.3 MB/s 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.213 13:25:19 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.213 13:25:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.472 13:25:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.472 13:25:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.472 13:25:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.472 13:25:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.472 13:25:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.472 13:25:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.472 13:25:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.472 13:25:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.472 13:25:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.472 13:25:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.039 13:25:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.039 13:25:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.039 13:25:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.039 13:25:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.039 13:25:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.039 13:25:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.039 13:25:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.039 13:25:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.039 13:25:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.039 13:25:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.039 13:25:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.298 13:25:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.298 13:25:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.298 13:25:20 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.298 13:25:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.298 13:25:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.298 13:25:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.298 13:25:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.298 13:25:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.298 13:25:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.298 13:25:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.298 13:25:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.298 13:25:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.298 13:25:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.557 13:25:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.815 [2024-11-20 13:25:20.582740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.815 [2024-11-20 13:25:20.633145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.815 [2024-11-20 13:25:20.633156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.815 [2024-11-20 13:25:20.692283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.815 [2024-11-20 13:25:20.692361] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.815 [2024-11-20 13:25:20.692374] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.099 13:25:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58328 /var/tmp/spdk-nbd.sock 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58328 ']' 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
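The nbd leg of app_repeat traced above is a complete write/verify/teardown cycle against the two exported devices. Condensed into a standalone sketch, with the bdev names, socket path, 4096x256 transfer size and cmp limit taken from the run above, and assuming an SPDK target is already listening on /var/tmp/spdk-nbd.sock with Malloc0 and Malloc1 created, it amounts to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
tmp=$(mktemp)

# export both malloc bdevs as kernel nbd block devices
"$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
"$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

# write 1 MiB of random data to each device, read it back, compare byte for byte
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1; do
  dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" "$dev"      # any difference fails the verify step
done
rm -f "$tmp"

# tear down and confirm nothing is left registered
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
"$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'   # expected to print nothing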
00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:12.099 13:25:23 event.app_repeat -- event/event.sh@39 -- # killprocess 58328 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58328 ']' 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58328 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58328 00:06:12.099 killing process with pid 58328 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58328' 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58328 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58328 00:06:12.099 spdk_app_start is called in Round 0. 00:06:12.099 Shutdown signal received, stop current app iteration 00:06:12.099 Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 reinitialization... 00:06:12.099 spdk_app_start is called in Round 1. 00:06:12.099 Shutdown signal received, stop current app iteration 00:06:12.099 Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 reinitialization... 00:06:12.099 spdk_app_start is called in Round 2. 00:06:12.099 Shutdown signal received, stop current app iteration 00:06:12.099 Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 reinitialization... 00:06:12.099 spdk_app_start is called in Round 3. 00:06:12.099 Shutdown signal received, stop current app iteration 00:06:12.099 13:25:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:12.099 13:25:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:12.099 00:06:12.099 real 0m19.257s 00:06:12.099 user 0m44.005s 00:06:12.099 sys 0m2.983s 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.099 ************************************ 00:06:12.099 END TEST app_repeat 00:06:12.099 ************************************ 00:06:12.099 13:25:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.099 13:25:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:12.099 13:25:23 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:12.099 13:25:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.099 13:25:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.099 13:25:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.099 ************************************ 00:06:12.099 START TEST cpu_locks 00:06:12.099 ************************************ 00:06:12.099 13:25:23 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:12.358 * Looking for test storage... 
00:06:12.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:12.358 13:25:24 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.358 13:25:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.358 13:25:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.358 13:25:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.358 13:25:24 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:12.358 13:25:24 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.358 13:25:24 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.358 --rc genhtml_branch_coverage=1 00:06:12.358 --rc genhtml_function_coverage=1 00:06:12.358 --rc genhtml_legend=1 00:06:12.358 --rc geninfo_all_blocks=1 00:06:12.358 --rc geninfo_unexecuted_blocks=1 00:06:12.358 00:06:12.358 ' 00:06:12.358 13:25:24 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.358 --rc genhtml_branch_coverage=1 00:06:12.358 --rc genhtml_function_coverage=1 
00:06:12.358 --rc genhtml_legend=1 00:06:12.358 --rc geninfo_all_blocks=1 00:06:12.358 --rc geninfo_unexecuted_blocks=1 00:06:12.358 00:06:12.358 ' 00:06:12.358 13:25:24 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.358 --rc genhtml_branch_coverage=1 00:06:12.358 --rc genhtml_function_coverage=1 00:06:12.358 --rc genhtml_legend=1 00:06:12.358 --rc geninfo_all_blocks=1 00:06:12.358 --rc geninfo_unexecuted_blocks=1 00:06:12.358 00:06:12.358 ' 00:06:12.358 13:25:24 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.358 --rc genhtml_branch_coverage=1 00:06:12.358 --rc genhtml_function_coverage=1 00:06:12.358 --rc genhtml_legend=1 00:06:12.358 --rc geninfo_all_blocks=1 00:06:12.358 --rc geninfo_unexecuted_blocks=1 00:06:12.358 00:06:12.358 ' 00:06:12.358 13:25:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:12.358 13:25:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:12.358 13:25:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:12.358 13:25:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:12.358 13:25:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.358 13:25:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.358 13:25:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.358 ************************************ 00:06:12.358 START TEST default_locks 00:06:12.358 ************************************ 00:06:12.358 13:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:12.358 13:25:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58772 00:06:12.358 13:25:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58772 00:06:12.358 13:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58772 ']' 00:06:12.358 13:25:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.358 13:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.358 13:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.358 13:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.358 13:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.358 13:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.358 [2024-11-20 13:25:24.259712] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:12.358 [2024-11-20 13:25:24.259862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58772 ] 00:06:12.617 [2024-11-20 13:25:24.407760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.617 [2024-11-20 13:25:24.465160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.617 [2024-11-20 13:25:24.540934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.875 13:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.875 13:25:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:12.875 13:25:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58772 00:06:12.875 13:25:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58772 00:06:12.875 13:25:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.440 13:25:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58772 00:06:13.440 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58772 ']' 00:06:13.440 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58772 00:06:13.440 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:13.440 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.440 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58772 00:06:13.440 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.441 killing process with pid 58772 00:06:13.441 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.441 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58772' 00:06:13.441 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58772 00:06:13.441 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58772 00:06:13.698 13:25:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58772 00:06:13.698 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:13.698 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58772 00:06:13.698 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58772 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58772 ']' 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.956 
13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.956 ERROR: process (pid: 58772) is no longer running 00:06:13.956 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58772) - No such process 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.956 00:06:13.956 real 0m1.474s 00:06:13.956 user 0m1.435s 00:06:13.956 sys 0m0.564s 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.956 13:25:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.956 ************************************ 00:06:13.956 END TEST default_locks 00:06:13.956 ************************************ 00:06:13.956 13:25:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:13.956 13:25:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.956 13:25:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.956 13:25:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.956 ************************************ 00:06:13.956 START TEST default_locks_via_rpc 00:06:13.956 ************************************ 00:06:13.956 13:25:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:13.956 13:25:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58811 00:06:13.956 13:25:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58811 00:06:13.956 13:25:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.956 13:25:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58811 ']' 00:06:13.956 13:25:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.956 13:25:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:13.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.956 13:25:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.956 13:25:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.956 13:25:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.956 [2024-11-20 13:25:25.780414] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:13.956 [2024-11-20 13:25:25.780508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58811 ] 00:06:14.215 [2024-11-20 13:25:25.923903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.215 [2024-11-20 13:25:25.974532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.215 [2024-11-20 13:25:26.050939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58811 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58811 00:06:15.151 13:25:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.409 13:25:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58811 00:06:15.409 13:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58811 ']' 00:06:15.409 13:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58811 00:06:15.409 13:25:27 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:15.409 13:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.409 13:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58811 00:06:15.409 13:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.409 13:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.409 killing process with pid 58811 00:06:15.409 13:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58811' 00:06:15.409 13:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58811 00:06:15.409 13:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58811 00:06:15.976 00:06:15.976 real 0m1.965s 00:06:15.976 user 0m2.149s 00:06:15.976 sys 0m0.602s 00:06:15.976 13:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.976 13:25:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.976 ************************************ 00:06:15.976 END TEST default_locks_via_rpc 00:06:15.976 ************************************ 00:06:15.976 13:25:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:15.976 13:25:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.976 13:25:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.976 13:25:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.976 ************************************ 00:06:15.976 START TEST non_locking_app_on_locked_coremask 00:06:15.976 ************************************ 00:06:15.976 13:25:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:15.976 13:25:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58862 00:06:15.976 13:25:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.976 13:25:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58862 /var/tmp/spdk.sock 00:06:15.976 13:25:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58862 ']' 00:06:15.976 13:25:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.976 13:25:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.976 13:25:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
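Both default-locks cases that just finished hinge on the same observable: a target started with a core mask takes a per-core file lock under /var/tmp, and lslocks reports it as an spdk_cpu_lock entry for that pid until the process exits. A rough equivalent of the traced steps, with the binary path and the -m 0x1 mask taken from the run above and a plain sleep standing in for the harness's waitforlisten polling:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$spdk_tgt" -m 0x1 &                         # claims core 0 and its /var/tmp/spdk_cpu_lock_* file
pid=$!
sleep 1

lslocks -p "$pid" | grep -q spdk_cpu_lock     # lock is held while the target runs

# the *_via_rpc variant releases and re-acquires the same locks at runtime
"$rpc" framework_disable_cpumask_locks
"$rpc" framework_enable_cpumask_locks

kill "$pid"
wait "$pid" || true                           # reaped with a non-zero status after SIGTERM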
00:06:15.976 13:25:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.976 13:25:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.976 [2024-11-20 13:25:27.805600] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:15.976 [2024-11-20 13:25:27.805712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58862 ] 00:06:16.234 [2024-11-20 13:25:27.959908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.234 [2024-11-20 13:25:28.016765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.234 [2024-11-20 13:25:28.089039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.507 13:25:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.507 13:25:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:16.507 13:25:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58876 00:06:16.507 13:25:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:16.507 13:25:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58876 /var/tmp/spdk2.sock 00:06:16.507 13:25:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58876 ']' 00:06:16.507 13:25:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.507 13:25:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.507 13:25:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.507 13:25:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.507 13:25:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.507 [2024-11-20 13:25:28.364365] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:16.507 [2024-11-20 13:25:28.364473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58876 ] 00:06:16.790 [2024-11-20 13:25:28.528848] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:16.790 [2024-11-20 13:25:28.528916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.790 [2024-11-20 13:25:28.656412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.049 [2024-11-20 13:25:28.812390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.615 13:25:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.615 13:25:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.615 13:25:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58862 00:06:17.615 13:25:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58862 00:06:17.615 13:25:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.549 13:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58862 00:06:18.549 13:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58862 ']' 00:06:18.549 13:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58862 00:06:18.549 13:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.549 13:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.549 13:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58862 00:06:18.549 13:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.549 13:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.549 killing process with pid 58862 00:06:18.549 13:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58862' 00:06:18.549 13:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58862 00:06:18.549 13:25:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58862 00:06:19.484 13:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58876 00:06:19.484 13:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58876 ']' 00:06:19.484 13:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58876 00:06:19.484 13:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:19.484 13:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.484 13:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58876 00:06:19.484 13:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.484 killing process with pid 58876 00:06:19.484 13:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.484 13:25:31 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58876' 00:06:19.484 13:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58876 00:06:19.484 13:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58876 00:06:19.743 00:06:19.743 real 0m3.797s 00:06:19.743 user 0m4.198s 00:06:19.743 sys 0m1.122s 00:06:19.743 13:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.743 13:25:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.743 ************************************ 00:06:19.743 END TEST non_locking_app_on_locked_coremask 00:06:19.743 ************************************ 00:06:19.743 13:25:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:19.743 13:25:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.743 13:25:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.743 13:25:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.743 ************************************ 00:06:19.743 START TEST locking_app_on_unlocked_coremask 00:06:19.743 ************************************ 00:06:19.743 13:25:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:19.743 13:25:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58943 00:06:19.743 13:25:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58943 /var/tmp/spdk.sock 00:06:19.743 13:25:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:19.743 13:25:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58943 ']' 00:06:19.743 13:25:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.743 13:25:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.743 13:25:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.743 13:25:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.743 13:25:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.743 [2024-11-20 13:25:31.634208] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:19.743 [2024-11-20 13:25:31.634303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58943 ] 00:06:20.002 [2024-11-20 13:25:31.778052] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.002 [2024-11-20 13:25:31.778112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.002 [2024-11-20 13:25:31.842581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.002 [2024-11-20 13:25:31.915796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.260 13:25:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.260 13:25:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.260 13:25:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58952 00:06:20.260 13:25:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.260 13:25:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58952 /var/tmp/spdk2.sock 00:06:20.260 13:25:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58952 ']' 00:06:20.260 13:25:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.260 13:25:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.260 13:25:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.260 13:25:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.260 13:25:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.260 [2024-11-20 13:25:32.191022] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
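The two coremask cases around this point are mirror images of each other: non_locking_app_on_locked_coremask starts a locking target first and an unlocked one second, while locking_app_on_unlocked_coremask starts the unlocked target first so that the second, regular instance can still claim core 0. The shared pattern, sketched with the flags visible in the traces (the second instance differs only in its RPC socket, and the unlocked one is the instance printing 'CPU core locks deactivated.'):

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$spdk_tgt" -m 0x1 --disable-cpumask-locks &      # runs on core 0 without taking the lock
pid1=$!
"$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &        # same core; the lock is free, so startup succeeds
pid2=$!
sleep 1

lslocks -p "$pid2" | grep -q spdk_cpu_lock          # only the locking instance shows spdk_cpu_lock

kill "$pid1" "$pid2"
wait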
00:06:20.260 [2024-11-20 13:25:32.191160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58952 ] 00:06:20.519 [2024-11-20 13:25:32.355862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.777 [2024-11-20 13:25:32.491943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.777 [2024-11-20 13:25:32.650763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.343 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.343 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:21.343 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58952 00:06:21.343 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58952 00:06:21.343 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.909 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58943 00:06:21.909 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58943 ']' 00:06:21.909 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58943 00:06:21.909 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.909 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.909 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58943 00:06:22.168 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.168 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.168 killing process with pid 58943 00:06:22.168 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58943' 00:06:22.168 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58943 00:06:22.168 13:25:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58943 00:06:22.735 13:25:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58952 00:06:22.735 13:25:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58952 ']' 00:06:22.735 13:25:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58952 00:06:22.735 13:25:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:22.735 13:25:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.735 13:25:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58952 00:06:22.993 13:25:34 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.993 13:25:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.993 killing process with pid 58952 00:06:22.993 13:25:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58952' 00:06:22.993 13:25:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58952 00:06:22.993 13:25:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58952 00:06:23.292 00:06:23.292 real 0m3.501s 00:06:23.292 user 0m3.857s 00:06:23.292 sys 0m1.039s 00:06:23.292 13:25:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.292 13:25:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.292 ************************************ 00:06:23.292 END TEST locking_app_on_unlocked_coremask 00:06:23.292 ************************************ 00:06:23.292 13:25:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:23.292 13:25:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.292 13:25:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.292 13:25:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.292 ************************************ 00:06:23.292 START TEST locking_app_on_locked_coremask 00:06:23.292 ************************************ 00:06:23.292 13:25:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:23.292 13:25:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59019 00:06:23.292 13:25:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59019 /var/tmp/spdk.sock 00:06:23.292 13:25:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59019 ']' 00:06:23.292 13:25:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.292 13:25:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.292 13:25:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.292 13:25:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.292 13:25:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.292 13:25:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.292 [2024-11-20 13:25:35.204149] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:23.292 [2024-11-20 13:25:35.204292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59019 ] 00:06:23.550 [2024-11-20 13:25:35.348314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.550 [2024-11-20 13:25:35.413220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.550 [2024-11-20 13:25:35.487839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59035 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59035 /var/tmp/spdk2.sock 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59035 /var/tmp/spdk2.sock 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59035 /var/tmp/spdk2.sock 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59035 ']' 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.482 13:25:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.482 [2024-11-20 13:25:36.283257] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:24.482 [2024-11-20 13:25:36.283362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59035 ] 00:06:24.741 [2024-11-20 13:25:36.441313] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59019 has claimed it. 00:06:24.741 [2024-11-20 13:25:36.441379] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.307 ERROR: process (pid: 59035) is no longer running 00:06:25.307 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59035) - No such process 00:06:25.307 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.307 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:25.307 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:25.308 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.308 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.308 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.308 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59019 00:06:25.308 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.308 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59019 00:06:25.567 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59019 00:06:25.567 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59019 ']' 00:06:25.567 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59019 00:06:25.567 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:25.567 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.567 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59019 00:06:25.567 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.567 killing process with pid 59019 00:06:25.567 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.567 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59019' 00:06:25.567 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59019 00:06:25.567 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59019 00:06:26.134 00:06:26.134 real 0m2.750s 00:06:26.134 user 0m3.234s 00:06:26.134 sys 0m0.638s 00:06:26.134 13:25:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.134 13:25:37 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:26.134 ************************************ 00:06:26.134 END TEST locking_app_on_locked_coremask 00:06:26.134 ************************************ 00:06:26.134 13:25:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:26.134 13:25:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.134 13:25:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.134 13:25:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.134 ************************************ 00:06:26.134 START TEST locking_overlapped_coremask 00:06:26.134 ************************************ 00:06:26.134 13:25:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:26.134 13:25:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59080 00:06:26.134 13:25:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59080 /var/tmp/spdk.sock 00:06:26.134 13:25:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:26.134 13:25:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59080 ']' 00:06:26.134 13:25:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.134 13:25:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.134 13:25:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.134 13:25:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.134 13:25:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.134 [2024-11-20 13:25:38.009254] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
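locking_app_on_locked_coremask, which ended just above, is the negative counterpart: with the first target holding the lock on core 0, a second plain launch on the same mask is expected to die with the 'Cannot create lock on core 0' and 'Unable to acquire lock on assigned core mask - exiting' errors seen in the trace. A minimal sketch of that expectation, detecting the failure through the second launch's exit status rather than through the harness's NOT waitforlisten polling (the non-zero exit on lock failure is an assumption of this sketch):

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$spdk_tgt" -m 0x1 &                     # first instance claims core 0
pid=$!
sleep 1

# a second instance on the already-claimed core must fail to start
if "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
  echo "second instance unexpectedly acquired core 0" >&2
  exit 1
fi

kill "$pid"
wait "$pid" || true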
00:06:26.134 [2024-11-20 13:25:38.009388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59080 ] 00:06:26.393 [2024-11-20 13:25:38.158140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.393 [2024-11-20 13:25:38.226945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.393 [2024-11-20 13:25:38.227019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.393 [2024-11-20 13:25:38.227025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.393 [2024-11-20 13:25:38.303627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.652 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59091 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59091 /var/tmp/spdk2.sock 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59091 /var/tmp/spdk2.sock 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59091 /var/tmp/spdk2.sock 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59091 ']' 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.653 13:25:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.653 [2024-11-20 13:25:38.592152] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
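Note on the claim failure that follows: the first target holds core mask 0x7 (cores 0-2) while the second is started with -m 0x1c (cores 2-4), so the two masks overlap on core 2 and the second target is expected to fail its core claim. A quick way to see the overlap from the masks alone (a sketch, not part of the test script):

  # 0x7 = 0b00111 -> cores 0,1,2; 0x1c = 0b11100 -> cores 2,3,4
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2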
00:06:26.653 [2024-11-20 13:25:38.592281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59091 ] 00:06:26.911 [2024-11-20 13:25:38.761509] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59080 has claimed it. 00:06:26.911 [2024-11-20 13:25:38.765223] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.555 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59091) - No such process 00:06:27.555 ERROR: process (pid: 59091) is no longer running 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59080 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59080 ']' 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59080 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59080 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59080' 00:06:27.555 killing process with pid 59080 00:06:27.555 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59080 00:06:27.555 13:25:39 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59080 00:06:28.121 00:06:28.121 real 0m1.843s 00:06:28.121 user 0m4.983s 00:06:28.121 sys 0m0.438s 00:06:28.121 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.121 13:25:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.121 ************************************ 00:06:28.121 END TEST locking_overlapped_coremask 00:06:28.121 ************************************ 00:06:28.121 13:25:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:28.121 13:25:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.121 13:25:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.121 13:25:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.121 ************************************ 00:06:28.121 START TEST locking_overlapped_coremask_via_rpc 00:06:28.121 ************************************ 00:06:28.121 13:25:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:28.121 13:25:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59136 00:06:28.121 13:25:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59136 /var/tmp/spdk.sock 00:06:28.121 13:25:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59136 ']' 00:06:28.121 13:25:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:28.121 13:25:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.121 13:25:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.121 13:25:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.121 13:25:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.122 13:25:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.122 [2024-11-20 13:25:39.891369] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:28.122 [2024-11-20 13:25:39.891475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59136 ] 00:06:28.122 [2024-11-20 13:25:40.036907] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.122 [2024-11-20 13:25:40.036967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.380 [2024-11-20 13:25:40.103850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.380 [2024-11-20 13:25:40.104015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.380 [2024-11-20 13:25:40.104026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.380 [2024-11-20 13:25:40.178744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.948 13:25:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.948 13:25:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:28.948 13:25:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:28.948 13:25:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59154 00:06:28.948 13:25:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59154 /var/tmp/spdk2.sock 00:06:28.948 13:25:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59154 ']' 00:06:28.948 13:25:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.948 13:25:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.948 13:25:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.948 13:25:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.948 13:25:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.206 [2024-11-20 13:25:40.947848] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:29.206 [2024-11-20 13:25:40.947978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59154 ] 00:06:29.206 [2024-11-20 13:25:41.109389] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:29.206 [2024-11-20 13:25:41.109467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.465 [2024-11-20 13:25:41.253126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.465 [2024-11-20 13:25:41.253234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:29.465 [2024-11-20 13:25:41.253237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.465 [2024-11-20 13:25:41.410213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.031 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.295 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.295 13:25:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.295 [2024-11-20 13:25:41.995365] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59136 has claimed it. 00:06:30.295 request: 00:06:30.295 { 00:06:30.295 "method": "framework_enable_cpumask_locks", 00:06:30.295 "req_id": 1 00:06:30.295 } 00:06:30.295 Got JSON-RPC error response 00:06:30.295 response: 00:06:30.295 { 00:06:30.295 "code": -32603, 00:06:30.295 "message": "Failed to claim CPU core: 2" 00:06:30.295 } 00:06:30.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:30.295 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:30.295 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:30.296 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.296 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.296 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.296 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59136 /var/tmp/spdk.sock 00:06:30.296 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59136 ']' 00:06:30.296 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.296 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.296 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.296 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.296 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.554 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.554 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.554 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59154 /var/tmp/spdk2.sock 00:06:30.554 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59154 ']' 00:06:30.554 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.554 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.554 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
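In this via_rpc variant both targets are launched with --disable-cpumask-locks, so neither claims per-core lock files at startup (the "CPU core locks deactivated" notices above); the locks are instead requested at runtime through the framework_enable_cpumask_locks RPC. The call on the first target's socket succeeds, while the same call against /var/tmp/spdk2.sock fails with "Failed to claim CPU core: 2" because core 2 is already locked. The rpc_cmd helper used in the trace ultimately drives scripts/rpc.py; a minimal equivalent sequence might look like the sketch below (same socket paths as this run):

  # succeeds: the first target (cores 0-2) takes the per-core lock files
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # expected to fail with -32603 "Failed to claim CPU core: 2": the second target overlaps on core 2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks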
00:06:30.554 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.554 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.813 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.813 ************************************ 00:06:30.813 END TEST locking_overlapped_coremask_via_rpc 00:06:30.813 ************************************ 00:06:30.813 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.813 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:30.813 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.813 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.813 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.813 00:06:30.813 real 0m2.709s 00:06:30.813 user 0m1.432s 00:06:30.813 sys 0m0.199s 00:06:30.813 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.813 13:25:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.813 13:25:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:30.813 13:25:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59136 ]] 00:06:30.813 13:25:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59136 00:06:30.813 13:25:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59136 ']' 00:06:30.813 13:25:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59136 00:06:30.813 13:25:42 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:30.813 13:25:42 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.813 13:25:42 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59136 00:06:30.813 killing process with pid 59136 00:06:30.813 13:25:42 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.813 13:25:42 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.813 13:25:42 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59136' 00:06:30.813 13:25:42 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59136 00:06:30.813 13:25:42 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59136 00:06:31.072 13:25:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59154 ]] 00:06:31.072 13:25:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59154 00:06:31.072 13:25:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59154 ']' 00:06:31.072 13:25:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59154 00:06:31.072 13:25:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:31.072 13:25:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.072 
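The check_remaining_locks step above compares the per-core lock files left on disk with the set expected for the surviving mask: for -m 0x7 that is /var/tmp/spdk_cpu_lock_000 through _002. A standalone check along the same lines as the test's own glob/brace-expansion logic (sketch):

  # compare lock files on disk with the set expected for mask 0x7 (cores 0-2)
  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${expected[*]}" ]] && echo "locks match" || echo "unexpected locks: ${locks[*]}"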
13:25:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59154 00:06:31.330 killing process with pid 59154 00:06:31.330 13:25:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:31.330 13:25:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:31.330 13:25:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59154' 00:06:31.330 13:25:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59154 00:06:31.330 13:25:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59154 00:06:31.588 13:25:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.588 Process with pid 59136 is not found 00:06:31.588 13:25:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:31.588 13:25:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59136 ]] 00:06:31.588 13:25:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59136 00:06:31.588 13:25:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59136 ']' 00:06:31.588 13:25:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59136 00:06:31.588 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59136) - No such process 00:06:31.588 13:25:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59136 is not found' 00:06:31.588 13:25:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59154 ]] 00:06:31.588 13:25:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59154 00:06:31.588 13:25:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59154 ']' 00:06:31.588 Process with pid 59154 is not found 00:06:31.588 13:25:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59154 00:06:31.588 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59154) - No such process 00:06:31.588 13:25:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59154 is not found' 00:06:31.588 13:25:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.588 ************************************ 00:06:31.588 END TEST cpu_locks 00:06:31.588 ************************************ 00:06:31.588 00:06:31.588 real 0m19.483s 00:06:31.588 user 0m34.474s 00:06:31.588 sys 0m5.530s 00:06:31.588 13:25:43 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.588 13:25:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.588 ************************************ 00:06:31.588 END TEST event 00:06:31.588 ************************************ 00:06:31.588 00:06:31.588 real 0m47.062s 00:06:31.588 user 1m30.853s 00:06:31.588 sys 0m9.276s 00:06:31.588 13:25:43 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.588 13:25:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.847 13:25:43 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.847 13:25:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.847 13:25:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.847 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:06:31.847 ************************************ 00:06:31.847 START TEST thread 00:06:31.847 ************************************ 00:06:31.847 13:25:43 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.847 * Looking for test storage... 
00:06:31.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:31.847 13:25:43 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.847 13:25:43 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.847 13:25:43 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.847 13:25:43 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.847 13:25:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.847 13:25:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.847 13:25:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.847 13:25:43 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.847 13:25:43 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.847 13:25:43 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.847 13:25:43 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.847 13:25:43 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.847 13:25:43 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.847 13:25:43 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.847 13:25:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.847 13:25:43 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:31.847 13:25:43 thread -- scripts/common.sh@345 -- # : 1 00:06:31.847 13:25:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.847 13:25:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.847 13:25:43 thread -- scripts/common.sh@365 -- # decimal 1 00:06:31.847 13:25:43 thread -- scripts/common.sh@353 -- # local d=1 00:06:31.847 13:25:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.847 13:25:43 thread -- scripts/common.sh@355 -- # echo 1 00:06:31.847 13:25:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.847 13:25:43 thread -- scripts/common.sh@366 -- # decimal 2 00:06:31.847 13:25:43 thread -- scripts/common.sh@353 -- # local d=2 00:06:31.847 13:25:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.847 13:25:43 thread -- scripts/common.sh@355 -- # echo 2 00:06:31.847 13:25:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.847 13:25:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.847 13:25:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.847 13:25:43 thread -- scripts/common.sh@368 -- # return 0 00:06:31.848 13:25:43 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.848 13:25:43 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.848 --rc genhtml_branch_coverage=1 00:06:31.848 --rc genhtml_function_coverage=1 00:06:31.848 --rc genhtml_legend=1 00:06:31.848 --rc geninfo_all_blocks=1 00:06:31.848 --rc geninfo_unexecuted_blocks=1 00:06:31.848 00:06:31.848 ' 00:06:31.848 13:25:43 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.848 --rc genhtml_branch_coverage=1 00:06:31.848 --rc genhtml_function_coverage=1 00:06:31.848 --rc genhtml_legend=1 00:06:31.848 --rc geninfo_all_blocks=1 00:06:31.848 --rc geninfo_unexecuted_blocks=1 00:06:31.848 00:06:31.848 ' 00:06:31.848 13:25:43 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:31.848 --rc genhtml_branch_coverage=1 00:06:31.848 --rc genhtml_function_coverage=1 00:06:31.848 --rc genhtml_legend=1 00:06:31.848 --rc geninfo_all_blocks=1 00:06:31.848 --rc geninfo_unexecuted_blocks=1 00:06:31.848 00:06:31.848 ' 00:06:31.848 13:25:43 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.848 --rc genhtml_branch_coverage=1 00:06:31.848 --rc genhtml_function_coverage=1 00:06:31.848 --rc genhtml_legend=1 00:06:31.848 --rc geninfo_all_blocks=1 00:06:31.848 --rc geninfo_unexecuted_blocks=1 00:06:31.848 00:06:31.848 ' 00:06:31.848 13:25:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.848 13:25:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:31.848 13:25:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.848 13:25:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.848 ************************************ 00:06:31.848 START TEST thread_poller_perf 00:06:31.848 ************************************ 00:06:31.848 13:25:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.848 [2024-11-20 13:25:43.774993] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:31.848 [2024-11-20 13:25:43.775628] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59285 ] 00:06:32.106 [2024-11-20 13:25:43.937100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.106 [2024-11-20 13:25:44.001953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.106 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:33.481 [2024-11-20T13:25:45.438Z] ====================================== 00:06:33.481 [2024-11-20T13:25:45.438Z] busy:2212072494 (cyc) 00:06:33.481 [2024-11-20T13:25:45.438Z] total_run_count: 310000 00:06:33.481 [2024-11-20T13:25:45.438Z] tsc_hz: 2200000000 (cyc) 00:06:33.481 [2024-11-20T13:25:45.438Z] ====================================== 00:06:33.481 [2024-11-20T13:25:45.438Z] poller_cost: 7135 (cyc), 3243 (nsec) 00:06:33.481 00:06:33.481 real 0m1.312s 00:06:33.482 user 0m1.148s 00:06:33.482 sys 0m0.052s 00:06:33.482 13:25:45 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.482 ************************************ 00:06:33.482 END TEST thread_poller_perf 00:06:33.482 ************************************ 00:06:33.482 13:25:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.482 13:25:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:33.482 13:25:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:33.482 13:25:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.482 13:25:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.482 ************************************ 00:06:33.482 START TEST thread_poller_perf 00:06:33.482 ************************************ 00:06:33.482 13:25:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:33.482 [2024-11-20 13:25:45.139058] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:33.482 [2024-11-20 13:25:45.139154] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59320 ] 00:06:33.482 [2024-11-20 13:25:45.288162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.482 Running 1000 pollers for 1 seconds with 0 microseconds period. 
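The poller_cost figure in the summary above is consistent with dividing the busy cycle count by total_run_count and converting to nanoseconds with the reported tsc_hz: 2212072494 / 310000 ≈ 7135 cycles ≈ 3243 ns for the 1 µs-period run, and the 0 µs-period run that follows works out the same way (2202661605 / 3956000 ≈ 556 cycles ≈ 252 ns). A quick re-derivation of the first run's numbers (sketch):

  # poller_cost (cyc)  = busy cycles / total_run_count
  # poller_cost (nsec) = poller_cost (cyc) * 1e9 / tsc_hz
  awk 'BEGIN { busy=2212072494; runs=310000; hz=2200000000;
               cyc = int(busy / runs);
               printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz }'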
00:06:33.482 [2024-11-20 13:25:45.362171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.884 [2024-11-20T13:25:46.841Z] ====================================== 00:06:34.884 [2024-11-20T13:25:46.841Z] busy:2202661605 (cyc) 00:06:34.884 [2024-11-20T13:25:46.841Z] total_run_count: 3956000 00:06:34.884 [2024-11-20T13:25:46.841Z] tsc_hz: 2200000000 (cyc) 00:06:34.884 [2024-11-20T13:25:46.841Z] ====================================== 00:06:34.884 [2024-11-20T13:25:46.841Z] poller_cost: 556 (cyc), 252 (nsec) 00:06:34.884 00:06:34.884 real 0m1.294s 00:06:34.884 user 0m1.143s 00:06:34.884 sys 0m0.044s 00:06:34.884 13:25:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.884 ************************************ 00:06:34.884 END TEST thread_poller_perf 00:06:34.884 ************************************ 00:06:34.884 13:25:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.884 13:25:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:34.884 00:06:34.884 real 0m2.903s 00:06:34.884 user 0m2.441s 00:06:34.884 sys 0m0.241s 00:06:34.884 13:25:46 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.884 ************************************ 00:06:34.884 END TEST thread 00:06:34.884 ************************************ 00:06:34.884 13:25:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.884 13:25:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:34.884 13:25:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:34.884 13:25:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.884 13:25:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.884 13:25:46 -- common/autotest_common.sh@10 -- # set +x 00:06:34.884 ************************************ 00:06:34.884 START TEST app_cmdline 00:06:34.884 ************************************ 00:06:34.884 13:25:46 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:34.884 * Looking for test storage... 
00:06:34.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:34.884 13:25:46 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.884 13:25:46 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.884 13:25:46 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.884 13:25:46 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.884 13:25:46 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.884 13:25:46 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.884 13:25:46 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.884 13:25:46 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.884 13:25:46 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.884 13:25:46 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.884 13:25:46 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.884 13:25:46 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.884 13:25:46 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.884 13:25:46 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.884 13:25:46 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.884 13:25:46 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:34.884 13:25:46 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.885 13:25:46 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:34.885 13:25:46 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.885 13:25:46 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.885 --rc genhtml_branch_coverage=1 00:06:34.885 --rc genhtml_function_coverage=1 00:06:34.885 --rc genhtml_legend=1 00:06:34.885 --rc geninfo_all_blocks=1 00:06:34.885 --rc geninfo_unexecuted_blocks=1 00:06:34.885 00:06:34.885 ' 00:06:34.885 13:25:46 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.885 --rc genhtml_branch_coverage=1 00:06:34.885 --rc genhtml_function_coverage=1 00:06:34.885 --rc genhtml_legend=1 00:06:34.885 --rc geninfo_all_blocks=1 00:06:34.885 --rc geninfo_unexecuted_blocks=1 00:06:34.885 
00:06:34.885 ' 00:06:34.885 13:25:46 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.885 --rc genhtml_branch_coverage=1 00:06:34.885 --rc genhtml_function_coverage=1 00:06:34.885 --rc genhtml_legend=1 00:06:34.885 --rc geninfo_all_blocks=1 00:06:34.885 --rc geninfo_unexecuted_blocks=1 00:06:34.885 00:06:34.885 ' 00:06:34.885 13:25:46 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.885 --rc genhtml_branch_coverage=1 00:06:34.885 --rc genhtml_function_coverage=1 00:06:34.885 --rc genhtml_legend=1 00:06:34.885 --rc geninfo_all_blocks=1 00:06:34.885 --rc geninfo_unexecuted_blocks=1 00:06:34.885 00:06:34.885 ' 00:06:34.885 13:25:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:34.885 13:25:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59403 00:06:34.885 13:25:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59403 00:06:34.885 13:25:46 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:34.885 13:25:46 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59403 ']' 00:06:34.885 13:25:46 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.885 13:25:46 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.885 13:25:46 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.885 13:25:46 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.885 13:25:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.885 [2024-11-20 13:25:46.804220] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
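Here spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs are served on this socket; the later call to env_dpdk_get_mem_stats is expected to be rejected with JSON-RPC error -32601 ("Method not found"), which is what the trace below shows. Driving the same check by hand would look roughly like this sketch (same rpc.py path as this run):

  # allowed by the --rpcs-allowed list
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods
  # not on the allowlist -> JSON-RPC error -32601 "Method not found"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats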
00:06:34.885 [2024-11-20 13:25:46.804930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59403 ] 00:06:35.157 [2024-11-20 13:25:46.955080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.157 [2024-11-20 13:25:47.019391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.157 [2024-11-20 13:25:47.092944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.415 13:25:47 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.415 13:25:47 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:35.415 13:25:47 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:35.674 { 00:06:35.674 "version": "SPDK v25.01-pre git sha1 d2ebd983e", 00:06:35.674 "fields": { 00:06:35.674 "major": 25, 00:06:35.674 "minor": 1, 00:06:35.674 "patch": 0, 00:06:35.674 "suffix": "-pre", 00:06:35.674 "commit": "d2ebd983e" 00:06:35.674 } 00:06:35.674 } 00:06:35.674 13:25:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:35.674 13:25:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:35.674 13:25:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:35.674 13:25:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:35.674 13:25:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:35.674 13:25:47 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.674 13:25:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.674 13:25:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:35.674 13:25:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:35.674 13:25:47 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.932 13:25:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:35.932 13:25:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:35.932 13:25:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.932 13:25:47 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:35.932 13:25:47 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.932 13:25:47 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:35.932 13:25:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.932 13:25:47 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:35.932 13:25:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.932 13:25:47 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:35.932 13:25:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.932 13:25:47 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:35.932 13:25:47 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:35.932 13:25:47 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.191 request: 00:06:36.191 { 00:06:36.191 "method": "env_dpdk_get_mem_stats", 00:06:36.191 "req_id": 1 00:06:36.191 } 00:06:36.191 Got JSON-RPC error response 00:06:36.191 response: 00:06:36.191 { 00:06:36.191 "code": -32601, 00:06:36.191 "message": "Method not found" 00:06:36.191 } 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.191 13:25:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59403 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59403 ']' 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59403 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59403 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.191 killing process with pid 59403 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59403' 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@973 -- # kill 59403 00:06:36.191 13:25:47 app_cmdline -- common/autotest_common.sh@978 -- # wait 59403 00:06:36.450 00:06:36.450 real 0m1.857s 00:06:36.450 user 0m2.245s 00:06:36.450 sys 0m0.486s 00:06:36.450 13:25:48 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.450 ************************************ 00:06:36.450 END TEST app_cmdline 00:06:36.450 ************************************ 00:06:36.450 13:25:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:36.709 13:25:48 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:36.709 13:25:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.709 13:25:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.709 13:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:36.709 ************************************ 00:06:36.709 START TEST version 00:06:36.709 ************************************ 00:06:36.709 13:25:48 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:36.709 * Looking for test storage... 
00:06:36.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:36.709 13:25:48 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.709 13:25:48 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.709 13:25:48 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.709 13:25:48 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.709 13:25:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.709 13:25:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.709 13:25:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.709 13:25:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.709 13:25:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.709 13:25:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.709 13:25:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.709 13:25:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.709 13:25:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.709 13:25:48 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.709 13:25:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.709 13:25:48 version -- scripts/common.sh@344 -- # case "$op" in 00:06:36.709 13:25:48 version -- scripts/common.sh@345 -- # : 1 00:06:36.709 13:25:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.709 13:25:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.709 13:25:48 version -- scripts/common.sh@365 -- # decimal 1 00:06:36.709 13:25:48 version -- scripts/common.sh@353 -- # local d=1 00:06:36.709 13:25:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.709 13:25:48 version -- scripts/common.sh@355 -- # echo 1 00:06:36.709 13:25:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.709 13:25:48 version -- scripts/common.sh@366 -- # decimal 2 00:06:36.709 13:25:48 version -- scripts/common.sh@353 -- # local d=2 00:06:36.709 13:25:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.709 13:25:48 version -- scripts/common.sh@355 -- # echo 2 00:06:36.709 13:25:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.709 13:25:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.709 13:25:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.709 13:25:48 version -- scripts/common.sh@368 -- # return 0 00:06:36.709 13:25:48 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.709 13:25:48 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.709 --rc genhtml_branch_coverage=1 00:06:36.709 --rc genhtml_function_coverage=1 00:06:36.709 --rc genhtml_legend=1 00:06:36.709 --rc geninfo_all_blocks=1 00:06:36.709 --rc geninfo_unexecuted_blocks=1 00:06:36.709 00:06:36.709 ' 00:06:36.709 13:25:48 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.709 --rc genhtml_branch_coverage=1 00:06:36.709 --rc genhtml_function_coverage=1 00:06:36.709 --rc genhtml_legend=1 00:06:36.709 --rc geninfo_all_blocks=1 00:06:36.709 --rc geninfo_unexecuted_blocks=1 00:06:36.709 00:06:36.709 ' 00:06:36.709 13:25:48 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.709 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:36.709 --rc genhtml_branch_coverage=1 00:06:36.709 --rc genhtml_function_coverage=1 00:06:36.710 --rc genhtml_legend=1 00:06:36.710 --rc geninfo_all_blocks=1 00:06:36.710 --rc geninfo_unexecuted_blocks=1 00:06:36.710 00:06:36.710 ' 00:06:36.710 13:25:48 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.710 --rc genhtml_branch_coverage=1 00:06:36.710 --rc genhtml_function_coverage=1 00:06:36.710 --rc genhtml_legend=1 00:06:36.710 --rc geninfo_all_blocks=1 00:06:36.710 --rc geninfo_unexecuted_blocks=1 00:06:36.710 00:06:36.710 ' 00:06:36.710 13:25:48 version -- app/version.sh@17 -- # get_header_version major 00:06:36.710 13:25:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.710 13:25:48 version -- app/version.sh@14 -- # cut -f2 00:06:36.710 13:25:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.710 13:25:48 version -- app/version.sh@17 -- # major=25 00:06:36.710 13:25:48 version -- app/version.sh@18 -- # get_header_version minor 00:06:36.710 13:25:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.710 13:25:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.710 13:25:48 version -- app/version.sh@14 -- # cut -f2 00:06:36.710 13:25:48 version -- app/version.sh@18 -- # minor=1 00:06:36.710 13:25:48 version -- app/version.sh@19 -- # get_header_version patch 00:06:36.710 13:25:48 version -- app/version.sh@14 -- # cut -f2 00:06:36.710 13:25:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.710 13:25:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.710 13:25:48 version -- app/version.sh@19 -- # patch=0 00:06:36.710 13:25:48 version -- app/version.sh@20 -- # get_header_version suffix 00:06:36.710 13:25:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.710 13:25:48 version -- app/version.sh@14 -- # cut -f2 00:06:36.710 13:25:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.710 13:25:48 version -- app/version.sh@20 -- # suffix=-pre 00:06:36.710 13:25:48 version -- app/version.sh@22 -- # version=25.1 00:06:36.710 13:25:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:36.710 13:25:48 version -- app/version.sh@28 -- # version=25.1rc0 00:06:36.710 13:25:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:36.710 13:25:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:36.969 13:25:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:36.969 13:25:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:36.969 00:06:36.969 real 0m0.252s 00:06:36.969 user 0m0.157s 00:06:36.969 sys 0m0.137s 00:06:36.969 13:25:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.969 ************************************ 00:06:36.969 END TEST version 00:06:36.969 ************************************ 00:06:36.970 13:25:48 version -- common/autotest_common.sh@10 -- # set +x 00:06:36.970 13:25:48 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:36.970 13:25:48 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:36.970 13:25:48 -- spdk/autotest.sh@194 -- # uname -s 00:06:36.970 13:25:48 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:36.970 13:25:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:36.970 13:25:48 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:36.970 13:25:48 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:36.970 13:25:48 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:36.970 13:25:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.970 13:25:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.970 13:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:36.970 ************************************ 00:06:36.970 START TEST spdk_dd 00:06:36.970 ************************************ 00:06:36.970 13:25:48 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:36.970 * Looking for test storage... 00:06:36.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:36.970 13:25:48 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.970 13:25:48 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.970 13:25:48 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.970 13:25:48 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:36.970 13:25:48 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.970 13:25:48 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.970 --rc genhtml_branch_coverage=1 00:06:36.970 --rc genhtml_function_coverage=1 00:06:36.970 --rc genhtml_legend=1 00:06:36.970 --rc geninfo_all_blocks=1 00:06:36.970 --rc geninfo_unexecuted_blocks=1 00:06:36.970 00:06:36.970 ' 00:06:36.970 13:25:48 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.970 --rc genhtml_branch_coverage=1 00:06:36.970 --rc genhtml_function_coverage=1 00:06:36.970 --rc genhtml_legend=1 00:06:36.970 --rc geninfo_all_blocks=1 00:06:36.970 --rc geninfo_unexecuted_blocks=1 00:06:36.970 00:06:36.970 ' 00:06:36.970 13:25:48 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.970 --rc genhtml_branch_coverage=1 00:06:36.970 --rc genhtml_function_coverage=1 00:06:36.970 --rc genhtml_legend=1 00:06:36.970 --rc geninfo_all_blocks=1 00:06:36.970 --rc geninfo_unexecuted_blocks=1 00:06:36.970 00:06:36.970 ' 00:06:36.970 13:25:48 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.970 --rc genhtml_branch_coverage=1 00:06:36.970 --rc genhtml_function_coverage=1 00:06:36.970 --rc genhtml_legend=1 00:06:36.970 --rc geninfo_all_blocks=1 00:06:36.970 --rc geninfo_unexecuted_blocks=1 00:06:36.970 00:06:36.970 ' 00:06:36.970 13:25:48 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.970 13:25:48 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.970 13:25:48 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.970 13:25:48 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.970 13:25:48 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.970 13:25:48 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:36.970 13:25:48 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.970 13:25:48 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:37.538 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:37.538 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:37.538 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:37.538 13:25:49 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:37.538 13:25:49 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:37.538 13:25:49 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:37.539 13:25:49 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:37.539 13:25:49 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:37.539 13:25:49 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.539 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:37.540 * spdk_dd linked to liburing 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:37.540 13:25:49 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:37.540 13:25:49 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:37.541 13:25:49 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:37.541 13:25:49 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:37.541 13:25:49 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:37.541 13:25:49 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:37.541 13:25:49 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:37.541 13:25:49 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:37.541 13:25:49 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:37.541 13:25:49 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:37.541 13:25:49 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:37.541 13:25:49 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.541 13:25:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:37.541 ************************************ 00:06:37.541 START TEST spdk_dd_basic_rw 00:06:37.541 ************************************ 00:06:37.541 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:37.541 * Looking for test storage... 00:06:37.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:37.799 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:37.799 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:06:37.799 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.799 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.799 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.799 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.799 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.799 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.799 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.799 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.800 --rc genhtml_branch_coverage=1 00:06:37.800 --rc genhtml_function_coverage=1 00:06:37.800 --rc genhtml_legend=1 00:06:37.800 --rc geninfo_all_blocks=1 00:06:37.800 --rc geninfo_unexecuted_blocks=1 00:06:37.800 00:06:37.800 ' 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.800 --rc genhtml_branch_coverage=1 00:06:37.800 --rc genhtml_function_coverage=1 00:06:37.800 --rc genhtml_legend=1 00:06:37.800 --rc geninfo_all_blocks=1 00:06:37.800 --rc geninfo_unexecuted_blocks=1 00:06:37.800 00:06:37.800 ' 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.800 --rc genhtml_branch_coverage=1 00:06:37.800 --rc genhtml_function_coverage=1 00:06:37.800 --rc genhtml_legend=1 00:06:37.800 --rc geninfo_all_blocks=1 00:06:37.800 --rc geninfo_unexecuted_blocks=1 00:06:37.800 00:06:37.800 ' 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.800 --rc genhtml_branch_coverage=1 00:06:37.800 --rc genhtml_function_coverage=1 00:06:37.800 --rc genhtml_legend=1 00:06:37.800 --rc geninfo_all_blocks=1 00:06:37.800 --rc geninfo_unexecuted_blocks=1 00:06:37.800 00:06:37.800 ' 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
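The lines that follow capture the controller's identify data, parse the native block size (4096 bytes for LBA Format #04), and then assert that spdk_dd rejects a --bs smaller than that. As a rough standalone sketch of the same invocation pattern, assuming the repo layout and device address shown in this log (the /tmp paths and the plain input file are stand-ins for the /dev/fd plumbing the test script actually uses):

# Illustrative sketch only -- dd_bs_lt_native_bs feeds the config and input
# over file descriptors; here they are written to ordinary files instead.
SPDK=/home/vagrant/spdk_repo/spdk

cat > /tmp/dd_conf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# Small input file; any size works since the run is expected to fail early.
dd if=/dev/urandom of=/tmp/dd_in.bin bs=4096 count=16 status=none

# With a 4096-byte native block size, --bs=2048 should be rejected with
# "--bs value cannot be less than ... native block size", as seen further on.
"$SPDK/build/bin/spdk_dd" --if=/tmp/dd_in.bin --ob=Nvme0n1 --bs=2048 \
    --json /tmp/dd_conf.json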
00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:37.800 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:38.061 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:38.061 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported
Weighted Round Robin: Not Supported
Vendor Specific: Not Supported
Reset Timeout: 7500 ms
Doorbell Stride: 4 bytes
NVM Subsystem Reset: Not Supported
Command Sets Supported
NVM Command Set: Supported
Boot Partition: Not Supported
Memory Page Size Minimum: 4096 bytes
Memory Page Size Maximum: 65536 bytes
Persistent Memory Region: Not Supported
Optional Asynchronous Events Supported
Namespace Attribute Notices: Supported
Firmware Activation Notices: Not Supported
ANA Change Notices: Not Supported
PLE Aggregate Log Change Notices: Not Supported
LBA Status Info Alert Notices: Not Supported
EGE Aggregate Log Change Notices: Not Supported
Normal NVM Subsystem Shutdown event: Not Supported
Zone Descriptor Change Notices: Not Supported
Discovery Log Change Notices: Not Supported
Controller Attributes
128-bit Host Identifier: Not Supported
Non-Operational Permissive Mode: Not Supported
NVM Sets: Not Supported
Read Recovery Levels: Not Supported
Endurance Groups: Not Supported
Predictable Latency Mode: Not Supported
Traffic Based Keep ALive: Not Supported
Namespace Granularity: Not Supported
SQ Associations: Not Supported
UUID List: Not Supported
Multi-Domain Subsystem: Not Supported
Fixed Capacity Management: Not Supported
Variable Capacity Management: Not Supported
Delete Endurance Group: Not Supported
Delete NVM Set: Not Supported
Extended LBA Formats Supported: Supported
Flexible Data Placement Supported: Not Supported
Controller Memory Buffer Support
================================
Supported: No
Persistent Memory Region Support
================================
Supported: No
Admin Command Set Attributes
============================
Security Send/Receive: Not Supported
Format NVM: Supported
Firmware Activate/Download: Not Supported
Namespace Management: Supported
Device Self-Test: Not Supported
Directives: Supported
NVMe-MI: Not Supported
Virtualization Management: Not Supported
Doorbell Buffer Config: Supported
Get LBA Status Capability: Not Supported
Command & Feature Lockdown Capability: Not Supported
Abort Command Limit: 4
Async Event Request Limit: 4
Number of Firmware Slots: N/A
Firmware Slot 1 Read-Only: N/A
Firmware Activation Without Reset: N/A
Multiple Update Detection Support: N/A
Firmware Update Granularity: No Information Provided
Per-Namespace SMART Log: Yes
Asymmetric Namespace Access Log Page: Not Supported
Subsystem NQN: nqn.2019-08.org.qemu:12340
Command Effects Log Page: Supported
Get Log Page Extended Data: Supported
Telemetry Log Pages: Not Supported
Persistent Event Log Pages: Not Supported
Supported Log Pages Log Page: May Support
Commands Supported & Effects Log Page: Not Supported
Feature Identifiers & Effects Log Page:May Support
NVMe-MI Commands & Effects Log Page: May Support
Data Area 4 for Telemetry Log: Not Supported
Error Log Page Entries Supported: 1
Keep Alive: Not Supported
NVM Command Set Attributes
==========================
Submission Queue Entry Size
Max: 64
Min: 64
Completion Queue Entry Size
Max: 16
Min: 16
Number of Namespaces: 256
Compare Command: Supported
Write Uncorrectable Command: Not Supported
Dataset Management Command: Supported
Write Zeroes Command: Supported
Set Features Save Field: Supported
Reservations: Not Supported
Timestamp: Supported
Copy: Supported
Volatile Write Cache: Present
Atomic Write Unit (Normal): 1
Atomic Write Unit (PFail): 1
Atomic Compare & Write Unit: 1
Fused Compare & Write: Not Supported
Scatter-Gather List
SGL Command Set: Supported
SGL Keyed: Not Supported
SGL Bit Bucket Descriptor: Not Supported
SGL Metadata Pointer: Not Supported
Oversized SGL: Not Supported
SGL Metadata Address: Not Supported
SGL Offset: Not Supported
Transport SGL Data Block: Not Supported
Replay Protected Memory Block: Not Supported
Firmware Slot Information
=========================
Active slot: 1
Slot 1 Firmware Revision: 1.0
Commands Supported and Effects
==============================
Admin Commands
--------------
Delete I/O Submission Queue (00h): Supported
Create I/O Submission Queue (01h): Supported
Get Log Page (02h): Supported
Delete I/O Completion Queue (04h): Supported
Create I/O Completion Queue (05h): Supported
Identify (06h): Supported
Abort (08h): Supported
Set Features (09h): Supported
Get Features (0Ah): Supported
Asynchronous Event Request (0Ch): Supported
Namespace Attachment (15h): Supported NS-Inventory-Change
Directive Send (19h): Supported
Directive Receive (1Ah): Supported
Virtualization Management (1Ch): Supported
Doorbell Buffer Config (7Ch): Supported
Format NVM (80h): Supported LBA-Change
I/O Commands
------------
Flush (00h): Supported LBA-Change
Write (01h): Supported LBA-Change
Read (02h): Supported
Compare (05h): Supported
Write Zeroes (08h): Supported LBA-Change
Dataset Management (09h): Supported LBA-Change
Unknown (0Ch): Supported
Unknown (12h): Supported
Copy (19h): Supported LBA-Change
Unknown (1Dh): Supported LBA-Change
Error Log
=========
Arbitration
===========
Arbitration Burst: no limit
Power Management
================
Number of Power States: 1
Current Power State: Power State #0
Power State #0:
Max Power: 25.00 W
Non-Operational State: Operational
Entry Latency: 16 microseconds
Exit Latency: 4 microseconds
Relative Read Throughput: 0
Relative Read Latency: 0
Relative Write Throughput: 0
Relative Write Latency: 0
Idle Power: Not Reported
Active Power: Not Reported
Non-Operational Permissive Mode: Not Supported
Health Information
==================
Critical Warnings:
Available Spare Space: OK
Temperature: OK
Device Reliability: OK
Read Only: No
Volatile Memory Backup: OK
Current Temperature: 323 Kelvin (50 Celsius)
Temperature Threshold: 343 Kelvin (70 Celsius)
Available Spare: 0%
Available Spare Threshold: 0%
Life Percentage Used: 0%
Data Units Read: 22
Data Units Written: 3
Host Read Commands: 496
Host Write Commands: 2
Controller Busy Time: 0 minutes
Power Cycles: 0
Power On Hours: 0 hours
Unsafe Shutdowns: 0
Unrecoverable Media Errors: 0
Lifetime Error Log Entries: 0
Warning Temperature Time: 0 minutes
Critical Temperature Time: 0 minutes
Number of Queues
================
Number of I/O Submission Queues: 64
Number of I/O Completion Queues: 64
ZNS Specific Controller Data
============================
Zone Append Size Limit: 0
Active Namespaces
=================
Namespace ID:1
Error Recovery Timeout: Unlimited
Command Set Identifier: NVM (00h)
Deallocate: Supported
Deallocated/Unwritten Error: Supported
Deallocated Read Value: All 0x00
Deallocate in Write Zeroes: Not Supported
Deallocated Guard Field: 0xFFFF
Flush: Supported
Reservation: Not Supported
Namespace Sharing Capabilities: Private
Size (in LBAs): 1310720 (5GiB)
Capacity (in LBAs): 1310720 (5GiB)
Utilization (in LBAs): 1310720 (5GiB)
Thin Provisioning: Not Supported
Per-NS Atomic Units: No
Maximum Single Source Range Length: 128
Maximum Copy Length: 128
Maximum Source Range Count: 128
NGUID/EUI64 Never Reused: No
Namespace Write Protected: No
Number of LBA Formats: 8
Current LBA Format: LBA Format #04
LBA Format #00: Data Size: 512 Metadata Size: 0
LBA Format #01: Data Size: 512 Metadata Size: 8
LBA Format #02: Data Size: 512 Metadata Size: 16
LBA Format #03: Data Size: 512 Metadata Size: 64
LBA Format #04: Data Size: 4096 Metadata Size: 0
LBA Format #05: Data Size: 4096 Metadata Size: 8
LBA Format #06: Data Size: 4096 Metadata Size: 16
LBA Format #07: Data Size: 4096 Metadata Size: 64
NVM Specific Namespace Data
===========================
Logical Block Storage Tag Mask: 0
Protection Information Capabilities:
16b Guard Protection Information Storage Tag Support: No
16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
Storage Tag Check Read Support: No
Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
=~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.062 ************************************ 00:06:38.062 START TEST dd_bs_lt_native_bs ************************************ 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:38.062 13:25:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.063 13:25:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:38.063 13:25:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.063 13:25:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:38.063 13:25:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:38.063 13:25:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:38.063 { 00:06:38.063 "subsystems": [ 00:06:38.063 { 00:06:38.063 "subsystem": "bdev", 00:06:38.063 "config": [ 00:06:38.063 { 00:06:38.063 "params": { 00:06:38.063 "trtype": "pcie", 00:06:38.063 "traddr": "0000:00:10.0", 00:06:38.063 "name": "Nvme0" 00:06:38.063 }, 00:06:38.063 "method": "bdev_nvme_attach_controller" 00:06:38.063 }, 00:06:38.063 { 00:06:38.063 "method": "bdev_wait_for_examine" 00:06:38.063 } 00:06:38.063 ] 00:06:38.063 } 00:06:38.063 ] 00:06:38.063 } 00:06:38.063 [2024-11-20 13:25:49.883612] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:38.063 [2024-11-20 13:25:49.883701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59747 ] 00:06:38.320 [2024-11-20 13:25:50.033398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.320 [2024-11-20 13:25:50.106901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.320 [2024-11-20 13:25:50.170778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.579 [2024-11-20 13:25:50.293034] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:38.579 [2024-11-20 13:25:50.293134] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.579 [2024-11-20 13:25:50.425137] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:38.579 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:06:38.579 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.579 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:06:38.579 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:06:38.579 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:06:38.579 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.579 00:06:38.579 real 0m0.662s 00:06:38.579 user 0m0.447s 00:06:38.579 sys 0m0.172s 00:06:38.579 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.579 13:25:50 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:38.579 ************************************ 00:06:38.579 END TEST dd_bs_lt_native_bs 00:06:38.579 ************************************ 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.837 ************************************ 00:06:38.837 START TEST dd_rw 00:06:38.837 ************************************ 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:38.837 13:25:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.402 13:25:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:39.402 13:25:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:39.402 13:25:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.402 13:25:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.402 [2024-11-20 13:25:51.193703] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
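Before following dd_rw further, it helps to see what the dd_bs_lt_native_bs check traced above actually did: it pulled the native block size out of the identify dump with the regex on the current LBA format (#04, Data Size 4096) and then expected spdk_dd to refuse a --bs of 2048, which the NOT wrapper turns into a passing result. A minimal sketch of those two steps, assuming the identify text is already captured in a variable named identify_output and that $conf holds the same bdev JSON the test passes over /dev/fd (both names are illustrative, not from the script):

# Extract the native LBA data size with the same pattern the test uses.
re='LBA Format #04: Data Size: *([0-9]+)'
[[ $identify_output =~ $re ]] && native_bs=${BASH_REMATCH[1]}   # 4096 in this run

# Negative check: a --bs below the native block size must make spdk_dd fail.
if spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=$((native_bs / 2)) --count=1 --json "$conf"; then
    echo 'unexpected: sub-native --bs was accepted' >&2
    exit 1
fi

In the trace the refusal shows up as the spdk_dd.c error "--bs value cannot be less than input (1) neither output (4096) native block size" followed by a non-zero exit, which is exactly what the wrapper converts into exit status 1 for the test runner.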
00:06:39.402 [2024-11-20 13:25:51.194527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59783 ] 00:06:39.402 { 00:06:39.402 "subsystems": [ 00:06:39.402 { 00:06:39.402 "subsystem": "bdev", 00:06:39.402 "config": [ 00:06:39.402 { 00:06:39.402 "params": { 00:06:39.402 "trtype": "pcie", 00:06:39.402 "traddr": "0000:00:10.0", 00:06:39.402 "name": "Nvme0" 00:06:39.402 }, 00:06:39.402 "method": "bdev_nvme_attach_controller" 00:06:39.402 }, 00:06:39.402 { 00:06:39.402 "method": "bdev_wait_for_examine" 00:06:39.402 } 00:06:39.402 ] 00:06:39.402 } 00:06:39.402 ] 00:06:39.402 } 00:06:39.402 [2024-11-20 13:25:51.342894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.660 [2024-11-20 13:25:51.433921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.660 [2024-11-20 13:25:51.494074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.660  [2024-11-20T13:25:51.876Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:39.919 00:06:39.919 13:25:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:39.919 13:25:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:39.919 13:25:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.919 13:25:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.177 { 00:06:40.177 "subsystems": [ 00:06:40.177 { 00:06:40.177 "subsystem": "bdev", 00:06:40.177 "config": [ 00:06:40.177 { 00:06:40.177 "params": { 00:06:40.177 "trtype": "pcie", 00:06:40.177 "traddr": "0000:00:10.0", 00:06:40.177 "name": "Nvme0" 00:06:40.177 }, 00:06:40.177 "method": "bdev_nvme_attach_controller" 00:06:40.177 }, 00:06:40.177 { 00:06:40.177 "method": "bdev_wait_for_examine" 00:06:40.177 } 00:06:40.177 ] 00:06:40.177 } 00:06:40.177 ] 00:06:40.177 } 00:06:40.177 [2024-11-20 13:25:51.889782] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
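Every spdk_dd call in this run is configured the same way: gen_conf emits the small JSON "bdev" subsystem blob shown in the trace (attach the PCIe controller at 0000:00:10.0 as Nvme0, then bdev_wait_for_examine) and it is handed to the binary as --json /dev/fd/62, so no config file is written to disk. A standalone invocation might look like the sketch below; the config body is copied from the log, while feeding it over /dev/stdin via a here-document and the /tmp scratch path are assumptions made for illustration:

spdk_dd --if=/tmp/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/stdin <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

The bdev_wait_for_examine entry is there so the app does not start copying until examine has finished and the namespace bdev Nvme0n1 exists.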
00:06:40.177 [2024-11-20 13:25:51.889959] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59797 ] 00:06:40.177 [2024-11-20 13:25:52.041071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.177 [2024-11-20 13:25:52.109707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.435 [2024-11-20 13:25:52.167934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.435  [2024-11-20T13:25:52.650Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:40.693 00:06:40.693 13:25:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.693 13:25:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:40.693 13:25:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:40.693 13:25:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:40.693 13:25:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:40.693 13:25:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:40.693 13:25:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:40.693 13:25:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:40.693 13:25:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:40.693 13:25:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.693 13:25:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.693 [2024-11-20 13:25:52.542029] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
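At this point one full verification cycle for the first block-size/queue-depth pair has been traced: 61440 bytes of generated data were written from dd.dump0 to Nvme0n1 at bs=4096 qd=1, read back into dd.dump1, compared with diff -q, and then clear_nvme zeroed the first mebibyte of the bdev so the next pass starts from clean media. Condensed into a sketch (the /tmp scratch paths and the $conf variable are assumptions; the option values are taken from the trace):

# One write/read/verify/clean pass, mirroring the commands traced above.
spdk_dd --if=/tmp/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json "$conf"              # write the generated data
spdk_dd --ib=Nvme0n1 --of=/tmp/dd.dump1 --bs=4096 --qd=1 --count=15 --json "$conf"   # read the same 15 blocks back
diff -q /tmp/dd.dump0 /tmp/dd.dump1                                                  # must be identical
spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$conf"            # clear_nvme: zero the first 1 MiB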
00:06:40.693 [2024-11-20 13:25:52.542343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59818 ] 00:06:40.693 { 00:06:40.693 "subsystems": [ 00:06:40.693 { 00:06:40.693 "subsystem": "bdev", 00:06:40.693 "config": [ 00:06:40.693 { 00:06:40.693 "params": { 00:06:40.693 "trtype": "pcie", 00:06:40.693 "traddr": "0000:00:10.0", 00:06:40.693 "name": "Nvme0" 00:06:40.693 }, 00:06:40.693 "method": "bdev_nvme_attach_controller" 00:06:40.693 }, 00:06:40.693 { 00:06:40.693 "method": "bdev_wait_for_examine" 00:06:40.693 } 00:06:40.693 ] 00:06:40.693 } 00:06:40.693 ] 00:06:40.693 } 00:06:40.949 [2024-11-20 13:25:52.687565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.949 [2024-11-20 13:25:52.756380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.949 [2024-11-20 13:25:52.814621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.207  [2024-11-20T13:25:53.164Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:41.207 00:06:41.207 13:25:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:41.207 13:25:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:41.207 13:25:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:41.207 13:25:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:41.207 13:25:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:41.207 13:25:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:41.207 13:25:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.158 13:25:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:42.158 13:25:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:42.158 13:25:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:42.159 13:25:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.159 [2024-11-20 13:25:53.853456] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:42.159 [2024-11-20 13:25:53.853841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59837 ] 00:06:42.159 { 00:06:42.159 "subsystems": [ 00:06:42.159 { 00:06:42.159 "subsystem": "bdev", 00:06:42.159 "config": [ 00:06:42.159 { 00:06:42.159 "params": { 00:06:42.159 "trtype": "pcie", 00:06:42.159 "traddr": "0000:00:10.0", 00:06:42.159 "name": "Nvme0" 00:06:42.159 }, 00:06:42.159 "method": "bdev_nvme_attach_controller" 00:06:42.159 }, 00:06:42.159 { 00:06:42.159 "method": "bdev_wait_for_examine" 00:06:42.159 } 00:06:42.159 ] 00:06:42.159 } 00:06:42.159 ] 00:06:42.159 } 00:06:42.159 [2024-11-20 13:25:54.003816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.159 [2024-11-20 13:25:54.070295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.417 [2024-11-20 13:25:54.127440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.417  [2024-11-20T13:25:54.631Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:42.674 00:06:42.674 13:25:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:42.674 13:25:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:42.674 13:25:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:42.674 13:25:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.674 [2024-11-20 13:25:54.510320] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:42.674 [2024-11-20 13:25:54.510623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59856 ] 00:06:42.674 { 00:06:42.674 "subsystems": [ 00:06:42.674 { 00:06:42.674 "subsystem": "bdev", 00:06:42.674 "config": [ 00:06:42.674 { 00:06:42.674 "params": { 00:06:42.674 "trtype": "pcie", 00:06:42.674 "traddr": "0000:00:10.0", 00:06:42.674 "name": "Nvme0" 00:06:42.674 }, 00:06:42.674 "method": "bdev_nvme_attach_controller" 00:06:42.674 }, 00:06:42.674 { 00:06:42.674 "method": "bdev_wait_for_examine" 00:06:42.675 } 00:06:42.675 ] 00:06:42.675 } 00:06:42.675 ] 00:06:42.675 } 00:06:42.932 [2024-11-20 13:25:54.657521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.932 [2024-11-20 13:25:54.734327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.932 [2024-11-20 13:25:54.796317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.190  [2024-11-20T13:25:55.147Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:43.190 00:06:43.190 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.190 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:43.190 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:43.190 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:43.190 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:43.190 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:43.190 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:43.190 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:43.190 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:43.190 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:43.190 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.448 { 00:06:43.448 "subsystems": [ 00:06:43.448 { 00:06:43.448 "subsystem": "bdev", 00:06:43.448 "config": [ 00:06:43.448 { 00:06:43.448 "params": { 00:06:43.448 "trtype": "pcie", 00:06:43.448 "traddr": "0000:00:10.0", 00:06:43.448 "name": "Nvme0" 00:06:43.448 }, 00:06:43.448 "method": "bdev_nvme_attach_controller" 00:06:43.448 }, 00:06:43.448 { 00:06:43.448 "method": "bdev_wait_for_examine" 00:06:43.448 } 00:06:43.448 ] 00:06:43.448 } 00:06:43.448 ] 00:06:43.448 } 00:06:43.448 [2024-11-20 13:25:55.196430] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:43.448 [2024-11-20 13:25:55.196906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59866 ] 00:06:43.448 [2024-11-20 13:25:55.351081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.707 [2024-11-20 13:25:55.423647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.707 [2024-11-20 13:25:55.485646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.707  [2024-11-20T13:25:55.922Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:43.965 00:06:43.965 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:43.965 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:43.965 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:43.965 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:43.965 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:43.965 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:43.965 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:43.965 13:25:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.532 13:25:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:44.532 13:25:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:44.532 13:25:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:44.532 13:25:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.789 { 00:06:44.789 "subsystems": [ 00:06:44.789 { 00:06:44.789 "subsystem": "bdev", 00:06:44.789 "config": [ 00:06:44.789 { 00:06:44.789 "params": { 00:06:44.789 "trtype": "pcie", 00:06:44.789 "traddr": "0000:00:10.0", 00:06:44.790 "name": "Nvme0" 00:06:44.790 }, 00:06:44.790 "method": "bdev_nvme_attach_controller" 00:06:44.790 }, 00:06:44.790 { 00:06:44.790 "method": "bdev_wait_for_examine" 00:06:44.790 } 00:06:44.790 ] 00:06:44.790 } 00:06:44.790 ] 00:06:44.790 } 00:06:44.790 [2024-11-20 13:25:56.506811] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
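The sizes that appear from here on follow a simple pattern: the block size doubles from the native 4096 up to 16384, the block count drops so each pass moves roughly 60 KiB (15 x 4096 = 61440, 7 x 8192 = 57344, 3 x 16384 = 49152), and every block size is exercised at queue depths 1 and 64. The sketch below is reconstructed from the trace rather than copied from dd/basic_rw.sh; run_pass is a hypothetical wrapper around the write/read/diff/clear cycle shown earlier:

native_bs=4096
qds=(1 64)
for shift in 0 1 2; do
    bs=$((native_bs << shift))     # 4096, 8192, 16384
    count=$((15 >> shift))         # 15, 7, 3 - matches the counts in the trace
    size=$((bs * count))           # 61440, 57344, 49152
    for qd in "${qds[@]}"; do
        run_pass "$bs" "$qd" "$count" "$size"
    done
done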
00:06:44.790 [2024-11-20 13:25:56.506932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59890 ] 00:06:44.790 [2024-11-20 13:25:56.658901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.790 [2024-11-20 13:25:56.719251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.048 [2024-11-20 13:25:56.775990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.048  [2024-11-20T13:25:57.263Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:45.306 00:06:45.306 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:45.306 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:45.306 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:45.306 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.306 { 00:06:45.306 "subsystems": [ 00:06:45.306 { 00:06:45.306 "subsystem": "bdev", 00:06:45.306 "config": [ 00:06:45.306 { 00:06:45.306 "params": { 00:06:45.306 "trtype": "pcie", 00:06:45.306 "traddr": "0000:00:10.0", 00:06:45.306 "name": "Nvme0" 00:06:45.306 }, 00:06:45.306 "method": "bdev_nvme_attach_controller" 00:06:45.306 }, 00:06:45.306 { 00:06:45.306 "method": "bdev_wait_for_examine" 00:06:45.306 } 00:06:45.306 ] 00:06:45.306 } 00:06:45.306 ] 00:06:45.306 } 00:06:45.306 [2024-11-20 13:25:57.147154] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:45.306 [2024-11-20 13:25:57.147280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59904 ] 00:06:45.564 [2024-11-20 13:25:57.293522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.564 [2024-11-20 13:25:57.349377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.564 [2024-11-20 13:25:57.407794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.823  [2024-11-20T13:25:57.780Z] Copying: 56/56 [kB] (average 18 MBps) 00:06:45.823 00:06:45.823 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.823 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:45.823 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:45.823 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:45.823 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:45.823 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:45.823 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:45.824 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:45.824 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:45.824 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:45.824 13:25:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.082 { 00:06:46.082 "subsystems": [ 00:06:46.082 { 00:06:46.082 "subsystem": "bdev", 00:06:46.082 "config": [ 00:06:46.082 { 00:06:46.082 "params": { 00:06:46.082 "trtype": "pcie", 00:06:46.082 "traddr": "0000:00:10.0", 00:06:46.082 "name": "Nvme0" 00:06:46.082 }, 00:06:46.082 "method": "bdev_nvme_attach_controller" 00:06:46.082 }, 00:06:46.082 { 00:06:46.082 "method": "bdev_wait_for_examine" 00:06:46.082 } 00:06:46.082 ] 00:06:46.082 } 00:06:46.082 ] 00:06:46.082 } 00:06:46.082 [2024-11-20 13:25:57.797551] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:46.082 [2024-11-20 13:25:57.797896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59925 ] 00:06:46.082 [2024-11-20 13:25:57.945585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.082 [2024-11-20 13:25:58.002075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.340 [2024-11-20 13:25:58.059170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.340  [2024-11-20T13:25:58.555Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:46.598 00:06:46.598 13:25:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:46.598 13:25:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:46.598 13:25:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:46.598 13:25:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:46.598 13:25:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:46.598 13:25:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:46.599 13:25:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.164 13:25:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:47.164 13:25:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:47.164 13:25:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:47.164 13:25:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.164 [2024-11-20 13:25:59.016674] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:47.164 [2024-11-20 13:25:59.016794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59944 ] 00:06:47.164 { 00:06:47.164 "subsystems": [ 00:06:47.164 { 00:06:47.164 "subsystem": "bdev", 00:06:47.164 "config": [ 00:06:47.164 { 00:06:47.164 "params": { 00:06:47.164 "trtype": "pcie", 00:06:47.164 "traddr": "0000:00:10.0", 00:06:47.164 "name": "Nvme0" 00:06:47.164 }, 00:06:47.164 "method": "bdev_nvme_attach_controller" 00:06:47.164 }, 00:06:47.164 { 00:06:47.164 "method": "bdev_wait_for_examine" 00:06:47.164 } 00:06:47.164 ] 00:06:47.164 } 00:06:47.164 ] 00:06:47.164 } 00:06:47.421 [2024-11-20 13:25:59.168078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.421 [2024-11-20 13:25:59.228973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.421 [2024-11-20 13:25:59.289126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.678  [2024-11-20T13:25:59.635Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:47.678 00:06:47.678 13:25:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:47.678 13:25:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:47.678 13:25:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:47.678 13:25:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.936 [2024-11-20 13:25:59.664841] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:47.936 [2024-11-20 13:25:59.664945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59963 ] 00:06:47.936 { 00:06:47.936 "subsystems": [ 00:06:47.936 { 00:06:47.936 "subsystem": "bdev", 00:06:47.936 "config": [ 00:06:47.936 { 00:06:47.936 "params": { 00:06:47.936 "trtype": "pcie", 00:06:47.936 "traddr": "0000:00:10.0", 00:06:47.936 "name": "Nvme0" 00:06:47.936 }, 00:06:47.936 "method": "bdev_nvme_attach_controller" 00:06:47.936 }, 00:06:47.936 { 00:06:47.936 "method": "bdev_wait_for_examine" 00:06:47.936 } 00:06:47.936 ] 00:06:47.936 } 00:06:47.936 ] 00:06:47.936 } 00:06:47.936 [2024-11-20 13:25:59.815670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.936 [2024-11-20 13:25:59.882381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.193 [2024-11-20 13:25:59.941625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.193  [2024-11-20T13:26:00.409Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:48.452 00:06:48.452 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.452 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:48.452 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:48.452 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:48.452 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:48.452 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:48.452 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:48.452 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:48.452 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:48.452 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:48.452 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.452 [2024-11-20 13:26:00.307797] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:48.452 [2024-11-20 13:26:00.308823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59973 ] 00:06:48.452 { 00:06:48.452 "subsystems": [ 00:06:48.452 { 00:06:48.452 "subsystem": "bdev", 00:06:48.452 "config": [ 00:06:48.452 { 00:06:48.452 "params": { 00:06:48.452 "trtype": "pcie", 00:06:48.452 "traddr": "0000:00:10.0", 00:06:48.452 "name": "Nvme0" 00:06:48.452 }, 00:06:48.452 "method": "bdev_nvme_attach_controller" 00:06:48.452 }, 00:06:48.452 { 00:06:48.452 "method": "bdev_wait_for_examine" 00:06:48.452 } 00:06:48.452 ] 00:06:48.452 } 00:06:48.452 ] 00:06:48.452 } 00:06:48.711 [2024-11-20 13:26:00.453212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.711 [2024-11-20 13:26:00.514078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.711 [2024-11-20 13:26:00.571330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.970  [2024-11-20T13:26:00.927Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:48.970 00:06:48.970 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:48.970 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:48.970 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:48.970 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:48.970 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:48.970 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:48.970 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:48.970 13:26:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.537 13:26:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:49.537 13:26:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:49.537 13:26:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:49.537 13:26:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.537 [2024-11-20 13:26:01.447435] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:49.537 [2024-11-20 13:26:01.447741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59992 ] 00:06:49.537 { 00:06:49.537 "subsystems": [ 00:06:49.537 { 00:06:49.537 "subsystem": "bdev", 00:06:49.537 "config": [ 00:06:49.537 { 00:06:49.537 "params": { 00:06:49.537 "trtype": "pcie", 00:06:49.537 "traddr": "0000:00:10.0", 00:06:49.537 "name": "Nvme0" 00:06:49.537 }, 00:06:49.537 "method": "bdev_nvme_attach_controller" 00:06:49.537 }, 00:06:49.537 { 00:06:49.537 "method": "bdev_wait_for_examine" 00:06:49.537 } 00:06:49.537 ] 00:06:49.537 } 00:06:49.537 ] 00:06:49.537 } 00:06:49.815 [2024-11-20 13:26:01.597044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.815 [2024-11-20 13:26:01.669649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.815 [2024-11-20 13:26:01.730405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.092  [2024-11-20T13:26:02.307Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:50.350 00:06:50.350 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:50.350 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:50.350 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:50.350 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.350 [2024-11-20 13:26:02.119383] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:50.350 [2024-11-20 13:26:02.119500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60011 ] 00:06:50.350 { 00:06:50.350 "subsystems": [ 00:06:50.350 { 00:06:50.350 "subsystem": "bdev", 00:06:50.350 "config": [ 00:06:50.350 { 00:06:50.350 "params": { 00:06:50.350 "trtype": "pcie", 00:06:50.350 "traddr": "0000:00:10.0", 00:06:50.350 "name": "Nvme0" 00:06:50.350 }, 00:06:50.350 "method": "bdev_nvme_attach_controller" 00:06:50.350 }, 00:06:50.350 { 00:06:50.350 "method": "bdev_wait_for_examine" 00:06:50.350 } 00:06:50.350 ] 00:06:50.350 } 00:06:50.350 ] 00:06:50.350 } 00:06:50.350 [2024-11-20 13:26:02.264281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.611 [2024-11-20 13:26:02.329615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.611 [2024-11-20 13:26:02.386605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.611  [2024-11-20T13:26:02.828Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:50.871 00:06:50.871 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.871 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:50.871 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:50.871 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:50.871 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:50.871 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:50.871 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:50.871 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:50.871 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:50.871 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:50.871 13:26:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.871 [2024-11-20 13:26:02.759499] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:50.871 [2024-11-20 13:26:02.760282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60032 ] 00:06:50.871 { 00:06:50.871 "subsystems": [ 00:06:50.871 { 00:06:50.871 "subsystem": "bdev", 00:06:50.871 "config": [ 00:06:50.871 { 00:06:50.871 "params": { 00:06:50.871 "trtype": "pcie", 00:06:50.871 "traddr": "0000:00:10.0", 00:06:50.871 "name": "Nvme0" 00:06:50.871 }, 00:06:50.871 "method": "bdev_nvme_attach_controller" 00:06:50.871 }, 00:06:50.871 { 00:06:50.871 "method": "bdev_wait_for_examine" 00:06:50.871 } 00:06:50.871 ] 00:06:50.871 } 00:06:50.871 ] 00:06:50.871 } 00:06:51.129 [2024-11-20 13:26:02.904606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.129 [2024-11-20 13:26:02.968444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.129 [2024-11-20 13:26:03.030542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.388  [2024-11-20T13:26:03.603Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:51.646 00:06:51.646 13:26:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:51.646 13:26:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:51.646 13:26:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:51.646 13:26:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:51.646 13:26:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:51.646 13:26:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:51.646 13:26:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.213 13:26:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:52.213 13:26:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:52.213 13:26:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:52.213 13:26:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.213 [2024-11-20 13:26:03.963104] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:52.213 [2024-11-20 13:26:03.963244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60051 ] 00:06:52.213 { 00:06:52.213 "subsystems": [ 00:06:52.213 { 00:06:52.213 "subsystem": "bdev", 00:06:52.213 "config": [ 00:06:52.213 { 00:06:52.213 "params": { 00:06:52.213 "trtype": "pcie", 00:06:52.213 "traddr": "0000:00:10.0", 00:06:52.213 "name": "Nvme0" 00:06:52.213 }, 00:06:52.213 "method": "bdev_nvme_attach_controller" 00:06:52.213 }, 00:06:52.213 { 00:06:52.213 "method": "bdev_wait_for_examine" 00:06:52.213 } 00:06:52.213 ] 00:06:52.213 } 00:06:52.213 ] 00:06:52.213 } 00:06:52.213 [2024-11-20 13:26:04.117532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.471 [2024-11-20 13:26:04.190443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.471 [2024-11-20 13:26:04.254897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.471  [2024-11-20T13:26:04.687Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:52.730 00:06:52.730 13:26:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:52.730 13:26:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:52.730 13:26:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:52.730 13:26:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.730 [2024-11-20 13:26:04.631003] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:52.730 [2024-11-20 13:26:04.631102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60059 ] 00:06:52.730 { 00:06:52.730 "subsystems": [ 00:06:52.730 { 00:06:52.730 "subsystem": "bdev", 00:06:52.730 "config": [ 00:06:52.730 { 00:06:52.730 "params": { 00:06:52.730 "trtype": "pcie", 00:06:52.730 "traddr": "0000:00:10.0", 00:06:52.730 "name": "Nvme0" 00:06:52.730 }, 00:06:52.730 "method": "bdev_nvme_attach_controller" 00:06:52.730 }, 00:06:52.730 { 00:06:52.730 "method": "bdev_wait_for_examine" 00:06:52.730 } 00:06:52.730 ] 00:06:52.730 } 00:06:52.730 ] 00:06:52.730 } 00:06:52.987 [2024-11-20 13:26:04.780572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.987 [2024-11-20 13:26:04.843941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.987 [2024-11-20 13:26:04.905699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.246  [2024-11-20T13:26:05.460Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:53.503 00:06:53.503 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.503 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:53.503 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:53.503 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:53.503 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:53.503 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:53.503 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:53.503 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:53.503 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:53.504 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:53.504 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:53.504 { 00:06:53.504 "subsystems": [ 00:06:53.504 { 00:06:53.504 "subsystem": "bdev", 00:06:53.504 "config": [ 00:06:53.504 { 00:06:53.504 "params": { 00:06:53.504 "trtype": "pcie", 00:06:53.504 "traddr": "0000:00:10.0", 00:06:53.504 "name": "Nvme0" 00:06:53.504 }, 00:06:53.504 "method": "bdev_nvme_attach_controller" 00:06:53.504 }, 00:06:53.504 { 00:06:53.504 "method": "bdev_wait_for_examine" 00:06:53.504 } 00:06:53.504 ] 00:06:53.504 } 00:06:53.504 ] 00:06:53.504 } 00:06:53.504 [2024-11-20 13:26:05.301693] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:53.504 [2024-11-20 13:26:05.301826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60080 ] 00:06:53.504 [2024-11-20 13:26:05.453481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.762 [2024-11-20 13:26:05.522425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.762 [2024-11-20 13:26:05.578974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.762  [2024-11-20T13:26:05.977Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:54.020 00:06:54.020 00:06:54.020 real 0m15.357s 00:06:54.020 user 0m11.243s 00:06:54.020 sys 0m5.815s 00:06:54.020 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.020 ************************************ 00:06:54.020 END TEST dd_rw 00:06:54.020 ************************************ 00:06:54.020 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.020 13:26:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:54.020 13:26:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.020 13:26:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.020 13:26:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.020 ************************************ 00:06:54.020 START TEST dd_rw_offset 00:06:54.020 ************************************ 00:06:54.020 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:06:54.020 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:54.020 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:54.020 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:54.020 13:26:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:54.279 13:26:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:54.279 13:26:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=pbibrx2xn4ovxmc7t3na08j0pebddmkj9oydck356j32ew4mxs60f8krc8ke1mxbywo5cjp8tx3wkntst55l4xrccletbvwyih49rmdg066t64fkni9dp00t1we8gum38bmlp754d6ce7qvejx5bk2wranilefwk0gfiewbiq98902mttdr075wv4gb6fdq8iryc7buiavu2lbro9g81wj3k0sxqjqzxgtgr1mfyja5rnoocxp64vgq0hdl4q29r83ut59jjbyagetew3q3g1zwncwynsmpnag4lccz91gz6u7tu45z2xfkkd9o6fbv0i27l0lhvifz3hs0cvkul2xw79t1uvrzt5hgydfxtf6kvphe3xvhx2ydxw2h2z3da33vyovxp15khxfxhtdbf9cs0u3whpr8bsnqf6vee67jp3erbvtp9toz7fzupjdg5qhhc92xszx636ca40dyul395tyhcqd8w4h79iqfj208dqdqnry7fu3o6apfjuzsel29z0ujhmpejfiwk1sqauv1m2t0bvqldhe3oa8fjrmikd0md6xngjamvwydsgg1vpzaynrwl5swa96wwrfgj4mmyjx2xjsnuezorfx8n6qdcgdx2l3liza1ff3fb5aufshnurrzoezo58084gy9zdwiyj0htq41faims9y0338ez6bq5mu273a7n0byb16q0gztdf8ao999jenbh5qinox1epqmmowwf4ojec62jiaa2qay047golo2w85mu68v9s5re8lq8r76tuxtrbrsfq8mobc05g2gqarxa3plod8tc2f08w1vn3ywl6ohc19uamu5n0xchrb3nz8pk9ns0p04i4suzlushmjwt6leoaaynjethntftete7a5zi6i488xlpt24zcgvbvgso2vajznffuyy7v8jl2r5d6un8ljn0x3jg01re4f472oafe8rjh3lbmfnj0uitmtxt88gq3ykp6ofib7k2hat6det8htsjkz6rxpp4mjqd5zkxvcw4dm5766x4hhdwfv5udkistumlavvyq2vp82r0layzk8oq4qdf9pnj60uw9dvve4hgkfghhp35c8lau8ugemi7e7951bgmc3w9fqyyh975yuz0gmtdjjbxsd6b4nlrz6eaf1fcjbc1dqmilaz6fno7cmstfbmgzee1so1y3eswbojb26t5bah6qihkaztcx70u4a1dqg9kjvaodj4xxk0n9e22pjvzsjg70mkd6k2e1q4qdvagktb3jf07hzq2tyyspnqdxn9sg3xvcr02eba3efnabml0zxxrtwror6i35523vwz8cexk4yifssxezf4urc7lnxiq78bbhis08u36dftp0krnb48z4qs6j5fbwb1aaydh8uiikhyf6jmsa6z7gxeldshjc8ei56n631b02krzxvllhr3q3gk6ee9rwhcyvi5vlorkwopcff9uj2xfl0g1hxjo9o5m3v9mkqz0ok8mlsjzf9jddgdczokqrsblo89gyn4mpzlqn67etyzfisflw2jsr0kabgctriv23srovook9vgj72pgvnh5mydz1rxw6ml8wrbrwji5nbq5140tv4nqcaqm6kb6fyzvd0893alnz0goflh2jmdo8qjslg3yv78sny8t3dqje9bys50ig6d8pbndomlajvo91ut37xtymre1qwycmhwlrjq8xjbgs9td5dmwygeyrghhmhulda663c17cslyp9exz0hevrub8ebsjbcs6rb8nkrz5fu0hjizln976tjldx1uweoet7srsjpel6ji0al47w621hdk60ue3pj5g1pc8v91g05zfeuew23pxghp7of5yowf82471dlmqvhzwut8fyvlbws1cd2cl3rlrzvzwlgoe22i2thj071blcvsma03kybnnu04z02s14l6cfan656shdlfftzrkvcl2qyhv4diponny0n7vijw42mze64yvf5vtipyxr1pv5xi5tiz57gzbvtqphsxp0vi4bybchtfhftkg2krtg6ufe3c4g96hcd0umw0ug6yylksrnk0bgdbs70unj51ph5440lapnl5qz223r22cpfqcugqe985f17yncg6riti2dsjzwu3lrc6au21trikjhudud6gqgqncn1nwj8q5ubc6qvy6c3mk115dvvq9u4a5b1n5xkxjoaa2ttrqynbzcle8ukz4mvhesojddi1nbgpuyvhs20ux7ndw0zw9mi9glwz6tmys2bulbfb4pab4m379ywe1sktlbt29zxezz5igy61f4f9rcef7ned5v28o7ld8ssyec5956ci54749auosfd4lwuz4jbfm26fnb4e8nmhebzhe83q7nm5mnzi7y4uuhn2ordizovuq4nqk1gtmw8aw9glwc8nrd5su2v7trggm1303ox78ton5397ht5wbroavcftr4cmomv6naosfqkywb5blnem7e1qrn84c6v8pbnh19efjtyy6l0cyqm48nmfm2ngz3x3od99chj31s5t1aqmxzx0jjg9ov5i78ry5liukxmw72z3sx48u31fofd7wq0rpgj6nvxmqn4uqnnm47j74zs5cf3ljtm2i57u2mifx0pkvpuixlr1odle58ci8a3y7r3mxxvc00gqty95reguirpgc8qymhgolhy42zywqeblx3o05ziff1mhss46i355glg0lpunn6l8yvwlh6kjqdpuud94f2db11lmgbh5qf0f6751qpxfkfbkynokdxb7xeg6rtln2bb80fmfku9jk68nmuwjaopmoi7xrz9d0mpc3hyl7a316wdguzdsxwgmth0pdhqgj3lyqc7tdlkfb1mhe1m4znph9ly0clcht5z00bwpdq0yoe4tvy9s5ls3ntvvbcsg94i9kwhjf59mnbdcvtwdgi21v3w4jto1sqig88b0jfytjrazmurcyed25ac6vxjdk12s8vt9fjm2dcde06yi8rb3z6uqcofafa5f00ss4pmhigvx81iy22wzp1cbdeoatlio6b7zg80hkvmkx0wyd2iigi18ln5pahrt95qk0eukj9s9wu48377tfppua0mz9eci63yrh2gw5fsib0w1eoible6mxmi9bcag3anokqhjmrwsho2usign5de0ii786c8095ruznvlw25h9k2tke16p516pe9x7o4twtm23dlqy73duvsi9q0kagso3k7m6j73ulbwtr16ma6ryl98devyvns1jcdf1izaarxx4epzrwtkdp45svnd4ogpboatcjnwzy001kbpdllrb4pmcn7bawvfob9gpv9w8udzctruar1t0ssrjcv07q0yb9cqvapwzmtenlkjvyd7d90kosqp8pt6i21wlx3kc3c4aickg7l70359yti7o8jtzmjzefa1jydi0yqtznqs0aybe9llar6u5ykny63tix0umglpgyqgr5gy1iy9jgy57abebygx75vv1itfe7frb52yha7y1q3h8ycbql20vezleud3z3ujiqkewx7r91aw1i
f071h1xurcp921bi32t9y52twl8xja91zkj7imyvygd14c8f3f11e42fqajjodl78axiif2gcbtxdzjwdrayl1njpuv0quuqybd2gdn36h17gvl9u5j17p10of3xu612o6uqynpvey1e0mq5tpwp9hq92k9cbs35c6zpsl4ybkp22v5wdfd7abgrw74ji86a108c0l3zrf18xx9dct2j4nueyit48e869ku07xk7yosotqk8sggsiz5lw1f7md008pyqmako4de6ekmdy06l2xzd5y03i2cvgx80vqyat32z855xfom5jgfhy7euvncugbr35ch4kl7wkee3vablxy2ykpaxpy5pnursivpbxeceytg58ux4h8r26b8c4yy90g50fmv68wmvlzewtlr2zooe621189xfciozfznsy7jdjzghuvixsr6t8phfhp35xcb0d5t90w8x5pdg5ew25zsacqqy3qb201sn75t7i6pq406x48gsbultacez6g0j6hs29f13lsmgy0nuzpwx2oj0w13o7bzkva 00:06:54.279 13:26:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:54.279 13:26:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:54.279 13:26:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:54.279 13:26:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:54.279 [2024-11-20 13:26:06.062329] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:54.279 [2024-11-20 13:26:06.062457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60116 ] 00:06:54.279 { 00:06:54.279 "subsystems": [ 00:06:54.279 { 00:06:54.279 "subsystem": "bdev", 00:06:54.279 "config": [ 00:06:54.279 { 00:06:54.279 "params": { 00:06:54.279 "trtype": "pcie", 00:06:54.279 "traddr": "0000:00:10.0", 00:06:54.279 "name": "Nvme0" 00:06:54.279 }, 00:06:54.279 "method": "bdev_nvme_attach_controller" 00:06:54.279 }, 00:06:54.279 { 00:06:54.279 "method": "bdev_wait_for_examine" 00:06:54.279 } 00:06:54.279 ] 00:06:54.279 } 00:06:54.279 ] 00:06:54.279 } 00:06:54.279 [2024-11-20 13:26:06.209160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.537 [2024-11-20 13:26:06.272538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.537 [2024-11-20 13:26:06.331393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.537  [2024-11-20T13:26:06.751Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:54.794 00:06:54.794 13:26:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:54.794 13:26:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:54.794 13:26:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:54.794 13:26:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:54.794 { 00:06:54.794 "subsystems": [ 00:06:54.794 { 00:06:54.794 "subsystem": "bdev", 00:06:54.794 "config": [ 00:06:54.794 { 00:06:54.794 "params": { 00:06:54.794 "trtype": "pcie", 00:06:54.794 "traddr": "0000:00:10.0", 00:06:54.794 "name": "Nvme0" 00:06:54.794 }, 00:06:54.794 "method": "bdev_nvme_attach_controller" 00:06:54.794 }, 00:06:54.794 { 00:06:54.794 "method": "bdev_wait_for_examine" 00:06:54.794 } 00:06:54.794 ] 00:06:54.794 } 00:06:54.794 ] 00:06:54.794 } 00:06:54.794 [2024-11-20 13:26:06.708913] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
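dd_rw_offset closes out this part of the trace: gen_bytes produced the 4096-character payload dumped above, spdk_dd wrote it one block into the bdev with --seek=1, and the follow-up command reads the same region back with --skip=1 --count=1 so the two buffers can be compared byte for byte with read -rn4096. A compact sketch of that round trip (the scratch paths are assumptions, the urandom/base64 pipeline merely stands in for gen_bytes, and $conf is the same bdev JSON as before):

# Write one native block of data at LBA offset 1, read it back from the same offset, compare.
head -c 4096 /dev/urandom | base64 -w0 | head -c 4096 > /tmp/dd.dump0      # stand-in for gen_bytes 4096
spdk_dd --if=/tmp/dd.dump0 --ob=Nvme0n1 --seek=1 --json "$conf"            # write at block offset 1
spdk_dd --ib=Nvme0n1 --of=/tmp/dd.dump1 --skip=1 --count=1 --json "$conf"  # read that block back
read -rn4096 data       < /tmp/dd.dump0
read -rn4096 data_check < /tmp/dd.dump1
[[ $data == "$data_check" ]] && echo 'offset read/write verified'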
00:06:54.794 [2024-11-20 13:26:06.709063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60126 ] 00:06:55.052 [2024-11-20 13:26:06.859886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.052 [2024-11-20 13:26:06.920214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.052 [2024-11-20 13:26:06.982331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.311  [2024-11-20T13:26:07.527Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:55.570 00:06:55.570 13:26:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ pbibrx2xn4ovxmc7t3na08j0pebddmkj9oydck356j32ew4mxs60f8krc8ke1mxbywo5cjp8tx3wkntst55l4xrccletbvwyih49rmdg066t64fkni9dp00t1we8gum38bmlp754d6ce7qvejx5bk2wranilefwk0gfiewbiq98902mttdr075wv4gb6fdq8iryc7buiavu2lbro9g81wj3k0sxqjqzxgtgr1mfyja5rnoocxp64vgq0hdl4q29r83ut59jjbyagetew3q3g1zwncwynsmpnag4lccz91gz6u7tu45z2xfkkd9o6fbv0i27l0lhvifz3hs0cvkul2xw79t1uvrzt5hgydfxtf6kvphe3xvhx2ydxw2h2z3da33vyovxp15khxfxhtdbf9cs0u3whpr8bsnqf6vee67jp3erbvtp9toz7fzupjdg5qhhc92xszx636ca40dyul395tyhcqd8w4h79iqfj208dqdqnry7fu3o6apfjuzsel29z0ujhmpejfiwk1sqauv1m2t0bvqldhe3oa8fjrmikd0md6xngjamvwydsgg1vpzaynrwl5swa96wwrfgj4mmyjx2xjsnuezorfx8n6qdcgdx2l3liza1ff3fb5aufshnurrzoezo58084gy9zdwiyj0htq41faims9y0338ez6bq5mu273a7n0byb16q0gztdf8ao999jenbh5qinox1epqmmowwf4ojec62jiaa2qay047golo2w85mu68v9s5re8lq8r76tuxtrbrsfq8mobc05g2gqarxa3plod8tc2f08w1vn3ywl6ohc19uamu5n0xchrb3nz8pk9ns0p04i4suzlushmjwt6leoaaynjethntftete7a5zi6i488xlpt24zcgvbvgso2vajznffuyy7v8jl2r5d6un8ljn0x3jg01re4f472oafe8rjh3lbmfnj0uitmtxt88gq3ykp6ofib7k2hat6det8htsjkz6rxpp4mjqd5zkxvcw4dm5766x4hhdwfv5udkistumlavvyq2vp82r0layzk8oq4qdf9pnj60uw9dvve4hgkfghhp35c8lau8ugemi7e7951bgmc3w9fqyyh975yuz0gmtdjjbxsd6b4nlrz6eaf1fcjbc1dqmilaz6fno7cmstfbmgzee1so1y3eswbojb26t5bah6qihkaztcx70u4a1dqg9kjvaodj4xxk0n9e22pjvzsjg70mkd6k2e1q4qdvagktb3jf07hzq2tyyspnqdxn9sg3xvcr02eba3efnabml0zxxrtwror6i35523vwz8cexk4yifssxezf4urc7lnxiq78bbhis08u36dftp0krnb48z4qs6j5fbwb1aaydh8uiikhyf6jmsa6z7gxeldshjc8ei56n631b02krzxvllhr3q3gk6ee9rwhcyvi5vlorkwopcff9uj2xfl0g1hxjo9o5m3v9mkqz0ok8mlsjzf9jddgdczokqrsblo89gyn4mpzlqn67etyzfisflw2jsr0kabgctriv23srovook9vgj72pgvnh5mydz1rxw6ml8wrbrwji5nbq5140tv4nqcaqm6kb6fyzvd0893alnz0goflh2jmdo8qjslg3yv78sny8t3dqje9bys50ig6d8pbndomlajvo91ut37xtymre1qwycmhwlrjq8xjbgs9td5dmwygeyrghhmhulda663c17cslyp9exz0hevrub8ebsjbcs6rb8nkrz5fu0hjizln976tjldx1uweoet7srsjpel6ji0al47w621hdk60ue3pj5g1pc8v91g05zfeuew23pxghp7of5yowf82471dlmqvhzwut8fyvlbws1cd2cl3rlrzvzwlgoe22i2thj071blcvsma03kybnnu04z02s14l6cfan656shdlfftzrkvcl2qyhv4diponny0n7vijw42mze64yvf5vtipyxr1pv5xi5tiz57gzbvtqphsxp0vi4bybchtfhftkg2krtg6ufe3c4g96hcd0umw0ug6yylksrnk0bgdbs70unj51ph5440lapnl5qz223r22cpfqcugqe985f17yncg6riti2dsjzwu3lrc6au21trikjhudud6gqgqncn1nwj8q5ubc6qvy6c3mk115dvvq9u4a5b1n5xkxjoaa2ttrqynbzcle8ukz4mvhesojddi1nbgpuyvhs20ux7ndw0zw9mi9glwz6tmys2bulbfb4pab4m379ywe1sktlbt29zxezz5igy61f4f9rcef7ned5v28o7ld8ssyec5956ci54749auosfd4lwuz4jbfm26fnb4e8nmhebzhe83q7nm5mnzi7y4uuhn2ordizovuq4nqk1gtmw8aw9glwc8nrd5su2v7trggm1303ox78ton5397ht5wbroavcftr4cmomv6naosfqkywb5blnem7e1qrn84c6v8pbnh19efjtyy6l0cyqm48nmfm2ngz3x3od99chj31s5t1aqmxzx0jjg9ov5i78ry5liukxmw72z3sx48u31fofd7wq0rpgj6nvxmqn4uqnnm47j74zs5
cf3ljtm2i57u2mifx0pkvpuixlr1odle58ci8a3y7r3mxxvc00gqty95reguirpgc8qymhgolhy42zywqeblx3o05ziff1mhss46i355glg0lpunn6l8yvwlh6kjqdpuud94f2db11lmgbh5qf0f6751qpxfkfbkynokdxb7xeg6rtln2bb80fmfku9jk68nmuwjaopmoi7xrz9d0mpc3hyl7a316wdguzdsxwgmth0pdhqgj3lyqc7tdlkfb1mhe1m4znph9ly0clcht5z00bwpdq0yoe4tvy9s5ls3ntvvbcsg94i9kwhjf59mnbdcvtwdgi21v3w4jto1sqig88b0jfytjrazmurcyed25ac6vxjdk12s8vt9fjm2dcde06yi8rb3z6uqcofafa5f00ss4pmhigvx81iy22wzp1cbdeoatlio6b7zg80hkvmkx0wyd2iigi18ln5pahrt95qk0eukj9s9wu48377tfppua0mz9eci63yrh2gw5fsib0w1eoible6mxmi9bcag3anokqhjmrwsho2usign5de0ii786c8095ruznvlw25h9k2tke16p516pe9x7o4twtm23dlqy73duvsi9q0kagso3k7m6j73ulbwtr16ma6ryl98devyvns1jcdf1izaarxx4epzrwtkdp45svnd4ogpboatcjnwzy001kbpdllrb4pmcn7bawvfob9gpv9w8udzctruar1t0ssrjcv07q0yb9cqvapwzmtenlkjvyd7d90kosqp8pt6i21wlx3kc3c4aickg7l70359yti7o8jtzmjzefa1jydi0yqtznqs0aybe9llar6u5ykny63tix0umglpgyqgr5gy1iy9jgy57abebygx75vv1itfe7frb52yha7y1q3h8ycbql20vezleud3z3ujiqkewx7r91aw1if071h1xurcp921bi32t9y52twl8xja91zkj7imyvygd14c8f3f11e42fqajjodl78axiif2gcbtxdzjwdrayl1njpuv0quuqybd2gdn36h17gvl9u5j17p10of3xu612o6uqynpvey1e0mq5tpwp9hq92k9cbs35c6zpsl4ybkp22v5wdfd7abgrw74ji86a108c0l3zrf18xx9dct2j4nueyit48e869ku07xk7yosotqk8sggsiz5lw1f7md008pyqmako4de6ekmdy06l2xzd5y03i2cvgx80vqyat32z855xfom5jgfhy7euvncugbr35ch4kl7wkee3vablxy2ykpaxpy5pnursivpbxeceytg58ux4h8r26b8c4yy90g50fmv68wmvlzewtlr2zooe621189xfciozfznsy7jdjzghuvixsr6t8phfhp35xcb0d5t90w8x5pdg5ew25zsacqqy3qb201sn75t7i6pq406x48gsbultacez6g0j6hs29f13lsmgy0nuzpwx2oj0w13o7bzkva == \p\b\i\b\r\x\2\x\n\4\o\v\x\m\c\7\t\3\n\a\0\8\j\0\p\e\b\d\d\m\k\j\9\o\y\d\c\k\3\5\6\j\3\2\e\w\4\m\x\s\6\0\f\8\k\r\c\8\k\e\1\m\x\b\y\w\o\5\c\j\p\8\t\x\3\w\k\n\t\s\t\5\5\l\4\x\r\c\c\l\e\t\b\v\w\y\i\h\4\9\r\m\d\g\0\6\6\t\6\4\f\k\n\i\9\d\p\0\0\t\1\w\e\8\g\u\m\3\8\b\m\l\p\7\5\4\d\6\c\e\7\q\v\e\j\x\5\b\k\2\w\r\a\n\i\l\e\f\w\k\0\g\f\i\e\w\b\i\q\9\8\9\0\2\m\t\t\d\r\0\7\5\w\v\4\g\b\6\f\d\q\8\i\r\y\c\7\b\u\i\a\v\u\2\l\b\r\o\9\g\8\1\w\j\3\k\0\s\x\q\j\q\z\x\g\t\g\r\1\m\f\y\j\a\5\r\n\o\o\c\x\p\6\4\v\g\q\0\h\d\l\4\q\2\9\r\8\3\u\t\5\9\j\j\b\y\a\g\e\t\e\w\3\q\3\g\1\z\w\n\c\w\y\n\s\m\p\n\a\g\4\l\c\c\z\9\1\g\z\6\u\7\t\u\4\5\z\2\x\f\k\k\d\9\o\6\f\b\v\0\i\2\7\l\0\l\h\v\i\f\z\3\h\s\0\c\v\k\u\l\2\x\w\7\9\t\1\u\v\r\z\t\5\h\g\y\d\f\x\t\f\6\k\v\p\h\e\3\x\v\h\x\2\y\d\x\w\2\h\2\z\3\d\a\3\3\v\y\o\v\x\p\1\5\k\h\x\f\x\h\t\d\b\f\9\c\s\0\u\3\w\h\p\r\8\b\s\n\q\f\6\v\e\e\6\7\j\p\3\e\r\b\v\t\p\9\t\o\z\7\f\z\u\p\j\d\g\5\q\h\h\c\9\2\x\s\z\x\6\3\6\c\a\4\0\d\y\u\l\3\9\5\t\y\h\c\q\d\8\w\4\h\7\9\i\q\f\j\2\0\8\d\q\d\q\n\r\y\7\f\u\3\o\6\a\p\f\j\u\z\s\e\l\2\9\z\0\u\j\h\m\p\e\j\f\i\w\k\1\s\q\a\u\v\1\m\2\t\0\b\v\q\l\d\h\e\3\o\a\8\f\j\r\m\i\k\d\0\m\d\6\x\n\g\j\a\m\v\w\y\d\s\g\g\1\v\p\z\a\y\n\r\w\l\5\s\w\a\9\6\w\w\r\f\g\j\4\m\m\y\j\x\2\x\j\s\n\u\e\z\o\r\f\x\8\n\6\q\d\c\g\d\x\2\l\3\l\i\z\a\1\f\f\3\f\b\5\a\u\f\s\h\n\u\r\r\z\o\e\z\o\5\8\0\8\4\g\y\9\z\d\w\i\y\j\0\h\t\q\4\1\f\a\i\m\s\9\y\0\3\3\8\e\z\6\b\q\5\m\u\2\7\3\a\7\n\0\b\y\b\1\6\q\0\g\z\t\d\f\8\a\o\9\9\9\j\e\n\b\h\5\q\i\n\o\x\1\e\p\q\m\m\o\w\w\f\4\o\j\e\c\6\2\j\i\a\a\2\q\a\y\0\4\7\g\o\l\o\2\w\8\5\m\u\6\8\v\9\s\5\r\e\8\l\q\8\r\7\6\t\u\x\t\r\b\r\s\f\q\8\m\o\b\c\0\5\g\2\g\q\a\r\x\a\3\p\l\o\d\8\t\c\2\f\0\8\w\1\v\n\3\y\w\l\6\o\h\c\1\9\u\a\m\u\5\n\0\x\c\h\r\b\3\n\z\8\p\k\9\n\s\0\p\0\4\i\4\s\u\z\l\u\s\h\m\j\w\t\6\l\e\o\a\a\y\n\j\e\t\h\n\t\f\t\e\t\e\7\a\5\z\i\6\i\4\8\8\x\l\p\t\2\4\z\c\g\v\b\v\g\s\o\2\v\a\j\z\n\f\f\u\y\y\7\v\8\j\l\2\r\5\d\6\u\n\8\l\j\n\0\x\3\j\g\0\1\r\e\4\f\4\7\2\o\a\f\e\8\r\j\h\3\l\b\m\f\n\j\0\u\i\t\m\t\x\t\8\8\g\q\3\y\k\p\6\o\f\i\b\7\k\2\h\a\t\6\d\e\t\8\h\t\s\j\k\z\6\r\x\p\p\4\m\j\q\d\5\z\k\x\v\c\w\4\d\m\5\7\6\6\x\
4\h\h\d\w\f\v\5\u\d\k\i\s\t\u\m\l\a\v\v\y\q\2\v\p\8\2\r\0\l\a\y\z\k\8\o\q\4\q\d\f\9\p\n\j\6\0\u\w\9\d\v\v\e\4\h\g\k\f\g\h\h\p\3\5\c\8\l\a\u\8\u\g\e\m\i\7\e\7\9\5\1\b\g\m\c\3\w\9\f\q\y\y\h\9\7\5\y\u\z\0\g\m\t\d\j\j\b\x\s\d\6\b\4\n\l\r\z\6\e\a\f\1\f\c\j\b\c\1\d\q\m\i\l\a\z\6\f\n\o\7\c\m\s\t\f\b\m\g\z\e\e\1\s\o\1\y\3\e\s\w\b\o\j\b\2\6\t\5\b\a\h\6\q\i\h\k\a\z\t\c\x\7\0\u\4\a\1\d\q\g\9\k\j\v\a\o\d\j\4\x\x\k\0\n\9\e\2\2\p\j\v\z\s\j\g\7\0\m\k\d\6\k\2\e\1\q\4\q\d\v\a\g\k\t\b\3\j\f\0\7\h\z\q\2\t\y\y\s\p\n\q\d\x\n\9\s\g\3\x\v\c\r\0\2\e\b\a\3\e\f\n\a\b\m\l\0\z\x\x\r\t\w\r\o\r\6\i\3\5\5\2\3\v\w\z\8\c\e\x\k\4\y\i\f\s\s\x\e\z\f\4\u\r\c\7\l\n\x\i\q\7\8\b\b\h\i\s\0\8\u\3\6\d\f\t\p\0\k\r\n\b\4\8\z\4\q\s\6\j\5\f\b\w\b\1\a\a\y\d\h\8\u\i\i\k\h\y\f\6\j\m\s\a\6\z\7\g\x\e\l\d\s\h\j\c\8\e\i\5\6\n\6\3\1\b\0\2\k\r\z\x\v\l\l\h\r\3\q\3\g\k\6\e\e\9\r\w\h\c\y\v\i\5\v\l\o\r\k\w\o\p\c\f\f\9\u\j\2\x\f\l\0\g\1\h\x\j\o\9\o\5\m\3\v\9\m\k\q\z\0\o\k\8\m\l\s\j\z\f\9\j\d\d\g\d\c\z\o\k\q\r\s\b\l\o\8\9\g\y\n\4\m\p\z\l\q\n\6\7\e\t\y\z\f\i\s\f\l\w\2\j\s\r\0\k\a\b\g\c\t\r\i\v\2\3\s\r\o\v\o\o\k\9\v\g\j\7\2\p\g\v\n\h\5\m\y\d\z\1\r\x\w\6\m\l\8\w\r\b\r\w\j\i\5\n\b\q\5\1\4\0\t\v\4\n\q\c\a\q\m\6\k\b\6\f\y\z\v\d\0\8\9\3\a\l\n\z\0\g\o\f\l\h\2\j\m\d\o\8\q\j\s\l\g\3\y\v\7\8\s\n\y\8\t\3\d\q\j\e\9\b\y\s\5\0\i\g\6\d\8\p\b\n\d\o\m\l\a\j\v\o\9\1\u\t\3\7\x\t\y\m\r\e\1\q\w\y\c\m\h\w\l\r\j\q\8\x\j\b\g\s\9\t\d\5\d\m\w\y\g\e\y\r\g\h\h\m\h\u\l\d\a\6\6\3\c\1\7\c\s\l\y\p\9\e\x\z\0\h\e\v\r\u\b\8\e\b\s\j\b\c\s\6\r\b\8\n\k\r\z\5\f\u\0\h\j\i\z\l\n\9\7\6\t\j\l\d\x\1\u\w\e\o\e\t\7\s\r\s\j\p\e\l\6\j\i\0\a\l\4\7\w\6\2\1\h\d\k\6\0\u\e\3\p\j\5\g\1\p\c\8\v\9\1\g\0\5\z\f\e\u\e\w\2\3\p\x\g\h\p\7\o\f\5\y\o\w\f\8\2\4\7\1\d\l\m\q\v\h\z\w\u\t\8\f\y\v\l\b\w\s\1\c\d\2\c\l\3\r\l\r\z\v\z\w\l\g\o\e\2\2\i\2\t\h\j\0\7\1\b\l\c\v\s\m\a\0\3\k\y\b\n\n\u\0\4\z\0\2\s\1\4\l\6\c\f\a\n\6\5\6\s\h\d\l\f\f\t\z\r\k\v\c\l\2\q\y\h\v\4\d\i\p\o\n\n\y\0\n\7\v\i\j\w\4\2\m\z\e\6\4\y\v\f\5\v\t\i\p\y\x\r\1\p\v\5\x\i\5\t\i\z\5\7\g\z\b\v\t\q\p\h\s\x\p\0\v\i\4\b\y\b\c\h\t\f\h\f\t\k\g\2\k\r\t\g\6\u\f\e\3\c\4\g\9\6\h\c\d\0\u\m\w\0\u\g\6\y\y\l\k\s\r\n\k\0\b\g\d\b\s\7\0\u\n\j\5\1\p\h\5\4\4\0\l\a\p\n\l\5\q\z\2\2\3\r\2\2\c\p\f\q\c\u\g\q\e\9\8\5\f\1\7\y\n\c\g\6\r\i\t\i\2\d\s\j\z\w\u\3\l\r\c\6\a\u\2\1\t\r\i\k\j\h\u\d\u\d\6\g\q\g\q\n\c\n\1\n\w\j\8\q\5\u\b\c\6\q\v\y\6\c\3\m\k\1\1\5\d\v\v\q\9\u\4\a\5\b\1\n\5\x\k\x\j\o\a\a\2\t\t\r\q\y\n\b\z\c\l\e\8\u\k\z\4\m\v\h\e\s\o\j\d\d\i\1\n\b\g\p\u\y\v\h\s\2\0\u\x\7\n\d\w\0\z\w\9\m\i\9\g\l\w\z\6\t\m\y\s\2\b\u\l\b\f\b\4\p\a\b\4\m\3\7\9\y\w\e\1\s\k\t\l\b\t\2\9\z\x\e\z\z\5\i\g\y\6\1\f\4\f\9\r\c\e\f\7\n\e\d\5\v\2\8\o\7\l\d\8\s\s\y\e\c\5\9\5\6\c\i\5\4\7\4\9\a\u\o\s\f\d\4\l\w\u\z\4\j\b\f\m\2\6\f\n\b\4\e\8\n\m\h\e\b\z\h\e\8\3\q\7\n\m\5\m\n\z\i\7\y\4\u\u\h\n\2\o\r\d\i\z\o\v\u\q\4\n\q\k\1\g\t\m\w\8\a\w\9\g\l\w\c\8\n\r\d\5\s\u\2\v\7\t\r\g\g\m\1\3\0\3\o\x\7\8\t\o\n\5\3\9\7\h\t\5\w\b\r\o\a\v\c\f\t\r\4\c\m\o\m\v\6\n\a\o\s\f\q\k\y\w\b\5\b\l\n\e\m\7\e\1\q\r\n\8\4\c\6\v\8\p\b\n\h\1\9\e\f\j\t\y\y\6\l\0\c\y\q\m\4\8\n\m\f\m\2\n\g\z\3\x\3\o\d\9\9\c\h\j\3\1\s\5\t\1\a\q\m\x\z\x\0\j\j\g\9\o\v\5\i\7\8\r\y\5\l\i\u\k\x\m\w\7\2\z\3\s\x\4\8\u\3\1\f\o\f\d\7\w\q\0\r\p\g\j\6\n\v\x\m\q\n\4\u\q\n\n\m\4\7\j\7\4\z\s\5\c\f\3\l\j\t\m\2\i\5\7\u\2\m\i\f\x\0\p\k\v\p\u\i\x\l\r\1\o\d\l\e\5\8\c\i\8\a\3\y\7\r\3\m\x\x\v\c\0\0\g\q\t\y\9\5\r\e\g\u\i\r\p\g\c\8\q\y\m\h\g\o\l\h\y\4\2\z\y\w\q\e\b\l\x\3\o\0\5\z\i\f\f\1\m\h\s\s\4\6\i\3\5\5\g\l\g\0\l\p\u\n\n\6\l\8\y\v\w\l\h\6\k\j\q\d\p\u\u\d\9\4\f\2\d\b\1\1\l\m\g\b\h\5\q\f\0\f\6\7\5\1\q\p\x\f\k\f\b\k\y\n\o\k\d\x\b\7\x\e\g\6\r\t\l\n\2\b\b\8\0\f\m\f\k\u\9\j\k\6\8\n\m\u\w\j\a\o\p\m\o
\i\7\x\r\z\9\d\0\m\p\c\3\h\y\l\7\a\3\1\6\w\d\g\u\z\d\s\x\w\g\m\t\h\0\p\d\h\q\g\j\3\l\y\q\c\7\t\d\l\k\f\b\1\m\h\e\1\m\4\z\n\p\h\9\l\y\0\c\l\c\h\t\5\z\0\0\b\w\p\d\q\0\y\o\e\4\t\v\y\9\s\5\l\s\3\n\t\v\v\b\c\s\g\9\4\i\9\k\w\h\j\f\5\9\m\n\b\d\c\v\t\w\d\g\i\2\1\v\3\w\4\j\t\o\1\s\q\i\g\8\8\b\0\j\f\y\t\j\r\a\z\m\u\r\c\y\e\d\2\5\a\c\6\v\x\j\d\k\1\2\s\8\v\t\9\f\j\m\2\d\c\d\e\0\6\y\i\8\r\b\3\z\6\u\q\c\o\f\a\f\a\5\f\0\0\s\s\4\p\m\h\i\g\v\x\8\1\i\y\2\2\w\z\p\1\c\b\d\e\o\a\t\l\i\o\6\b\7\z\g\8\0\h\k\v\m\k\x\0\w\y\d\2\i\i\g\i\1\8\l\n\5\p\a\h\r\t\9\5\q\k\0\e\u\k\j\9\s\9\w\u\4\8\3\7\7\t\f\p\p\u\a\0\m\z\9\e\c\i\6\3\y\r\h\2\g\w\5\f\s\i\b\0\w\1\e\o\i\b\l\e\6\m\x\m\i\9\b\c\a\g\3\a\n\o\k\q\h\j\m\r\w\s\h\o\2\u\s\i\g\n\5\d\e\0\i\i\7\8\6\c\8\0\9\5\r\u\z\n\v\l\w\2\5\h\9\k\2\t\k\e\1\6\p\5\1\6\p\e\9\x\7\o\4\t\w\t\m\2\3\d\l\q\y\7\3\d\u\v\s\i\9\q\0\k\a\g\s\o\3\k\7\m\6\j\7\3\u\l\b\w\t\r\1\6\m\a\6\r\y\l\9\8\d\e\v\y\v\n\s\1\j\c\d\f\1\i\z\a\a\r\x\x\4\e\p\z\r\w\t\k\d\p\4\5\s\v\n\d\4\o\g\p\b\o\a\t\c\j\n\w\z\y\0\0\1\k\b\p\d\l\l\r\b\4\p\m\c\n\7\b\a\w\v\f\o\b\9\g\p\v\9\w\8\u\d\z\c\t\r\u\a\r\1\t\0\s\s\r\j\c\v\0\7\q\0\y\b\9\c\q\v\a\p\w\z\m\t\e\n\l\k\j\v\y\d\7\d\9\0\k\o\s\q\p\8\p\t\6\i\2\1\w\l\x\3\k\c\3\c\4\a\i\c\k\g\7\l\7\0\3\5\9\y\t\i\7\o\8\j\t\z\m\j\z\e\f\a\1\j\y\d\i\0\y\q\t\z\n\q\s\0\a\y\b\e\9\l\l\a\r\6\u\5\y\k\n\y\6\3\t\i\x\0\u\m\g\l\p\g\y\q\g\r\5\g\y\1\i\y\9\j\g\y\5\7\a\b\e\b\y\g\x\7\5\v\v\1\i\t\f\e\7\f\r\b\5\2\y\h\a\7\y\1\q\3\h\8\y\c\b\q\l\2\0\v\e\z\l\e\u\d\3\z\3\u\j\i\q\k\e\w\x\7\r\9\1\a\w\1\i\f\0\7\1\h\1\x\u\r\c\p\9\2\1\b\i\3\2\t\9\y\5\2\t\w\l\8\x\j\a\9\1\z\k\j\7\i\m\y\v\y\g\d\1\4\c\8\f\3\f\1\1\e\4\2\f\q\a\j\j\o\d\l\7\8\a\x\i\i\f\2\g\c\b\t\x\d\z\j\w\d\r\a\y\l\1\n\j\p\u\v\0\q\u\u\q\y\b\d\2\g\d\n\3\6\h\1\7\g\v\l\9\u\5\j\1\7\p\1\0\o\f\3\x\u\6\1\2\o\6\u\q\y\n\p\v\e\y\1\e\0\m\q\5\t\p\w\p\9\h\q\9\2\k\9\c\b\s\3\5\c\6\z\p\s\l\4\y\b\k\p\2\2\v\5\w\d\f\d\7\a\b\g\r\w\7\4\j\i\8\6\a\1\0\8\c\0\l\3\z\r\f\1\8\x\x\9\d\c\t\2\j\4\n\u\e\y\i\t\4\8\e\8\6\9\k\u\0\7\x\k\7\y\o\s\o\t\q\k\8\s\g\g\s\i\z\5\l\w\1\f\7\m\d\0\0\8\p\y\q\m\a\k\o\4\d\e\6\e\k\m\d\y\0\6\l\2\x\z\d\5\y\0\3\i\2\c\v\g\x\8\0\v\q\y\a\t\3\2\z\8\5\5\x\f\o\m\5\j\g\f\h\y\7\e\u\v\n\c\u\g\b\r\3\5\c\h\4\k\l\7\w\k\e\e\3\v\a\b\l\x\y\2\y\k\p\a\x\p\y\5\p\n\u\r\s\i\v\p\b\x\e\c\e\y\t\g\5\8\u\x\4\h\8\r\2\6\b\8\c\4\y\y\9\0\g\5\0\f\m\v\6\8\w\m\v\l\z\e\w\t\l\r\2\z\o\o\e\6\2\1\1\8\9\x\f\c\i\o\z\f\z\n\s\y\7\j\d\j\z\g\h\u\v\i\x\s\r\6\t\8\p\h\f\h\p\3\5\x\c\b\0\d\5\t\9\0\w\8\x\5\p\d\g\5\e\w\2\5\z\s\a\c\q\q\y\3\q\b\2\0\1\s\n\7\5\t\7\i\6\p\q\4\0\6\x\4\8\g\s\b\u\l\t\a\c\e\z\6\g\0\j\6\h\s\2\9\f\1\3\l\s\m\g\y\0\n\u\z\p\w\x\2\o\j\0\w\1\3\o\7\b\z\k\v\a ]] 00:06:55.571 00:06:55.571 real 0m1.338s 00:06:55.571 user 0m0.908s 00:06:55.571 sys 0m0.638s 00:06:55.571 ************************************ 00:06:55.571 END TEST dd_rw_offset 00:06:55.571 ************************************ 00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:55.571 13:26:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.571 { 00:06:55.571 "subsystems": [ 00:06:55.571 { 00:06:55.571 "subsystem": "bdev", 00:06:55.571 "config": [ 00:06:55.571 { 00:06:55.571 "params": { 00:06:55.571 "trtype": "pcie", 00:06:55.571 "traddr": "0000:00:10.0", 00:06:55.571 "name": "Nvme0" 00:06:55.571 }, 00:06:55.571 "method": "bdev_nvme_attach_controller" 00:06:55.571 }, 00:06:55.571 { 00:06:55.571 "method": "bdev_wait_for_examine" 00:06:55.571 } 00:06:55.571 ] 00:06:55.571 } 00:06:55.571 ] 00:06:55.571 } 00:06:55.571 [2024-11-20 13:26:07.400840] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:55.571 [2024-11-20 13:26:07.401325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60161 ] 00:06:55.830 [2024-11-20 13:26:07.549576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.830 [2024-11-20 13:26:07.604535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.830 [2024-11-20 13:26:07.659451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.830  [2024-11-20T13:26:08.046Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:56.089 00:06:56.089 13:26:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:56.090 ************************************ 00:06:56.090 END TEST spdk_dd_basic_rw 00:06:56.090 ************************************ 00:06:56.090 00:06:56.090 real 0m18.568s 00:06:56.090 user 0m13.265s 00:06:56.090 sys 0m7.145s 00:06:56.090 13:26:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.090 13:26:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.090 13:26:08 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:56.090 13:26:08 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.090 13:26:08 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.090 13:26:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:56.090 ************************************ 00:06:56.090 START TEST spdk_dd_posix 00:06:56.090 ************************************ 00:06:56.090 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:56.404 * Looking for test storage... 
00:06:56.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.404 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.405 --rc genhtml_branch_coverage=1 00:06:56.405 --rc genhtml_function_coverage=1 00:06:56.405 --rc genhtml_legend=1 00:06:56.405 --rc geninfo_all_blocks=1 00:06:56.405 --rc geninfo_unexecuted_blocks=1 00:06:56.405 00:06:56.405 ' 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.405 --rc genhtml_branch_coverage=1 00:06:56.405 --rc genhtml_function_coverage=1 00:06:56.405 --rc genhtml_legend=1 00:06:56.405 --rc geninfo_all_blocks=1 00:06:56.405 --rc geninfo_unexecuted_blocks=1 00:06:56.405 00:06:56.405 ' 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.405 --rc genhtml_branch_coverage=1 00:06:56.405 --rc genhtml_function_coverage=1 00:06:56.405 --rc genhtml_legend=1 00:06:56.405 --rc geninfo_all_blocks=1 00:06:56.405 --rc geninfo_unexecuted_blocks=1 00:06:56.405 00:06:56.405 ' 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.405 --rc genhtml_branch_coverage=1 00:06:56.405 --rc genhtml_function_coverage=1 00:06:56.405 --rc genhtml_legend=1 00:06:56.405 --rc geninfo_all_blocks=1 00:06:56.405 --rc geninfo_unexecuted_blocks=1 00:06:56.405 00:06:56.405 ' 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:56.405 * First test run, liburing in use 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:56.405 ************************************ 00:06:56.405 START TEST dd_flag_append 00:06:56.405 ************************************ 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=augntsunsdtpey5brhs0ftew27mcd06a 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=w2gsnhrashekcpz40hg1ns4hotx7d0hd 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s augntsunsdtpey5brhs0ftew27mcd06a 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s w2gsnhrashekcpz40hg1ns4hotx7d0hd 00:06:56.405 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:56.405 [2024-11-20 13:26:08.302746] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
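(Annotation, not part of the captured trace.) The dd_flag_append steps above reduce to: two 32-byte random strings are generated, one is written to dd.dump0 and the other to dd.dump1, dd.dump0 is copied onto dd.dump1 with --oflag=append, and the test passes only if dd.dump1 ends up holding its original string followed by dd.dump0's. A rough sketch of that flow; variable names follow the log, the exact posix.sh wiring may differ:

  dump0=$(gen_bytes 32)   # e.g. augntsunsdtpey5brhs0ftew27mcd06a in this run
  dump1=$(gen_bytes 32)   # e.g. w2gsnhrashekcpz40hg1ns4hotx7d0hd in this run
  printf %s "$dump0" > dd.dump0
  printf %s "$dump1" > dd.dump1
  spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
  # append must preserve dd.dump1's existing bytes and add dd.dump0's after them
  [[ $(<dd.dump1) == "${dump1}${dump0}" ]]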
00:06:56.405 [2024-11-20 13:26:08.303367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60233 ] 00:06:56.664 [2024-11-20 13:26:08.451271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.664 [2024-11-20 13:26:08.516103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.664 [2024-11-20 13:26:08.577776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.924  [2024-11-20T13:26:08.881Z] Copying: 32/32 [B] (average 31 kBps) 00:06:56.924 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ w2gsnhrashekcpz40hg1ns4hotx7d0hdaugntsunsdtpey5brhs0ftew27mcd06a == \w\2\g\s\n\h\r\a\s\h\e\k\c\p\z\4\0\h\g\1\n\s\4\h\o\t\x\7\d\0\h\d\a\u\g\n\t\s\u\n\s\d\t\p\e\y\5\b\r\h\s\0\f\t\e\w\2\7\m\c\d\0\6\a ]] 00:06:56.924 00:06:56.924 real 0m0.576s 00:06:56.924 user 0m0.294s 00:06:56.924 sys 0m0.317s 00:06:56.924 ************************************ 00:06:56.924 END TEST dd_flag_append 00:06:56.924 ************************************ 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:56.924 ************************************ 00:06:56.924 START TEST dd_flag_directory 00:06:56.924 ************************************ 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:56.924 13:26:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.183 [2024-11-20 13:26:08.928982] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:57.183 [2024-11-20 13:26:08.929092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60262 ] 00:06:57.183 [2024-11-20 13:26:09.077294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.183 [2024-11-20 13:26:09.135331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.443 [2024-11-20 13:26:09.193057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.443 [2024-11-20 13:26:09.234718] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:57.443 [2024-11-20 13:26:09.234788] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:57.443 [2024-11-20 13:26:09.234822] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.443 [2024-11-20 13:26:09.360391] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.702 13:26:09 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.702 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:57.702 [2024-11-20 13:26:09.494729] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:57.702 [2024-11-20 13:26:09.494850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60271 ] 00:06:57.702 [2024-11-20 13:26:09.641770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.961 [2024-11-20 13:26:09.687557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.961 [2024-11-20 13:26:09.743550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.961 [2024-11-20 13:26:09.782959] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:57.961 [2024-11-20 13:26:09.783015] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:57.961 [2024-11-20 13:26:09.783051] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.961 [2024-11-20 13:26:09.906414] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:58.219 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:58.219 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:58.219 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:58.219 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:58.219 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:58.219 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:58.219 00:06:58.219 real 0m1.116s 00:06:58.219 user 0m0.617s 00:06:58.219 sys 0m0.290s 00:06:58.219 ************************************ 00:06:58.219 END TEST dd_flag_directory 00:06:58.219 ************************************ 00:06:58.219 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.219 13:26:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:58.219 13:26:10 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:58.219 13:26:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.219 13:26:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.219 13:26:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:58.219 ************************************ 00:06:58.219 START TEST dd_flag_nofollow 00:06:58.219 ************************************ 00:06:58.219 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:06:58.219 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:58.219 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:58.219 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:58.219 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:58.219 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.219 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:58.220 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.220 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.220 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.220 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.220 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.220 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.220 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.220 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.220 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.220 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.220 [2024-11-20 13:26:10.085952] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:06:58.220 [2024-11-20 13:26:10.086039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60305 ] 00:06:58.479 [2024-11-20 13:26:10.225476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.479 [2024-11-20 13:26:10.283849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.479 [2024-11-20 13:26:10.339758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.479 [2024-11-20 13:26:10.380997] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:58.479 [2024-11-20 13:26:10.381379] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:58.479 [2024-11-20 13:26:10.381407] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.738 [2024-11-20 13:26:10.508021] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.738 13:26:10 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.738 13:26:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:58.738 [2024-11-20 13:26:10.645061] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:06:58.738 [2024-11-20 13:26:10.645421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60309 ] 00:06:58.997 [2024-11-20 13:26:10.794983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.997 [2024-11-20 13:26:10.869654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.997 [2024-11-20 13:26:10.930461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.255 [2024-11-20 13:26:10.974097] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:59.255 [2024-11-20 13:26:10.974163] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:59.255 [2024-11-20 13:26:10.974217] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.255 [2024-11-20 13:26:11.103720] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:59.255 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:59.255 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:59.255 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:59.255 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:59.255 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:59.255 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:59.255 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:59.255 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:59.255 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:59.255 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.514 [2024-11-20 13:26:11.238154] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
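(Annotation, not part of the captured trace.) dd_flag_directory and dd_flag_nofollow share the same negative-test shape: the copy runs under the NOT helper, which only passes when spdk_dd exits non-zero, and the error text ("Not a directory", "Too many levels of symbolic links") confirms the flag was actually enforced. A condensed sketch of the nofollow case, assuming NOT inverts the exit status as in autotest_common.sh:

  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  # with --iflag=nofollow a symlinked input must be rejected
  NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
  # with --oflag=nofollow a symlinked output must be rejected
  NOT spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow
  # without the flag, copying through the link succeeds (512 bytes)
  spdk_dd --if=dd.dump0.link --of=dd.dump1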
00:06:59.514 [2024-11-20 13:26:11.238461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60323 ] 00:06:59.514 [2024-11-20 13:26:11.383372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.514 [2024-11-20 13:26:11.440955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.771 [2024-11-20 13:26:11.499240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.771  [2024-11-20T13:26:11.729Z] Copying: 512/512 [B] (average 500 kBps) 00:06:59.772 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ lqq6f537z4z3e9vmtb0pxvmfbcn9yzow9oko4kvdl483uysqswtjm9ccvz018pja0gg05w3scjdxf9i4owr9selj34bvhwqnn0jrvl3wfiti1kctj8hzkpdujct5waf2l8rz7pk3hasa4wkitceol8fxwm7jycgep9pbmv94evklaerhpwsncw0ugqtjc89281to1gscn4ctodun56qmf5091zy8v9mc5eonl3xvy0iihj0wlwjqchs3m6hb8imr3m5x02qqjvbotjtd0khh9fhvjo0rniggw8h4amb4vat5lrvex9vdq6h5v5v91tz9cbxozmitv7we9jr99qhc82kgkz358d3i4b4kgzpju2obdf0c3w66cum6yj36vrpaoluzmk3z2s1ucz6gh458j27e9u58nd3dpwouzpjyxnv96o3ey99o1vu2nw99zz2yssemmla7j3wmoc1n3h3poe0raaqom5gh9xg7gwvhsy0k5hkpdclrxnkmrnqagqg5 == \l\q\q\6\f\5\3\7\z\4\z\3\e\9\v\m\t\b\0\p\x\v\m\f\b\c\n\9\y\z\o\w\9\o\k\o\4\k\v\d\l\4\8\3\u\y\s\q\s\w\t\j\m\9\c\c\v\z\0\1\8\p\j\a\0\g\g\0\5\w\3\s\c\j\d\x\f\9\i\4\o\w\r\9\s\e\l\j\3\4\b\v\h\w\q\n\n\0\j\r\v\l\3\w\f\i\t\i\1\k\c\t\j\8\h\z\k\p\d\u\j\c\t\5\w\a\f\2\l\8\r\z\7\p\k\3\h\a\s\a\4\w\k\i\t\c\e\o\l\8\f\x\w\m\7\j\y\c\g\e\p\9\p\b\m\v\9\4\e\v\k\l\a\e\r\h\p\w\s\n\c\w\0\u\g\q\t\j\c\8\9\2\8\1\t\o\1\g\s\c\n\4\c\t\o\d\u\n\5\6\q\m\f\5\0\9\1\z\y\8\v\9\m\c\5\e\o\n\l\3\x\v\y\0\i\i\h\j\0\w\l\w\j\q\c\h\s\3\m\6\h\b\8\i\m\r\3\m\5\x\0\2\q\q\j\v\b\o\t\j\t\d\0\k\h\h\9\f\h\v\j\o\0\r\n\i\g\g\w\8\h\4\a\m\b\4\v\a\t\5\l\r\v\e\x\9\v\d\q\6\h\5\v\5\v\9\1\t\z\9\c\b\x\o\z\m\i\t\v\7\w\e\9\j\r\9\9\q\h\c\8\2\k\g\k\z\3\5\8\d\3\i\4\b\4\k\g\z\p\j\u\2\o\b\d\f\0\c\3\w\6\6\c\u\m\6\y\j\3\6\v\r\p\a\o\l\u\z\m\k\3\z\2\s\1\u\c\z\6\g\h\4\5\8\j\2\7\e\9\u\5\8\n\d\3\d\p\w\o\u\z\p\j\y\x\n\v\9\6\o\3\e\y\9\9\o\1\v\u\2\n\w\9\9\z\z\2\y\s\s\e\m\m\l\a\7\j\3\w\m\o\c\1\n\3\h\3\p\o\e\0\r\a\a\q\o\m\5\g\h\9\x\g\7\g\w\v\h\s\y\0\k\5\h\k\p\d\c\l\r\x\n\k\m\r\n\q\a\g\q\g\5 ]] 00:07:00.030 00:07:00.030 real 0m1.701s 00:07:00.030 user 0m0.918s 00:07:00.030 sys 0m0.603s 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.030 ************************************ 00:07:00.030 END TEST dd_flag_nofollow 00:07:00.030 ************************************ 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:00.030 ************************************ 00:07:00.030 START TEST dd_flag_noatime 00:07:00.030 ************************************ 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732109171 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732109171 00:07:00.030 13:26:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:00.965 13:26:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.965 [2024-11-20 13:26:12.867534] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:00.965 [2024-11-20 13:26:12.867675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60360 ] 00:07:01.225 [2024-11-20 13:26:13.020756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.225 [2024-11-20 13:26:13.086754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.225 [2024-11-20 13:26:13.148236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.484  [2024-11-20T13:26:13.441Z] Copying: 512/512 [B] (average 500 kBps) 00:07:01.484 00:07:01.484 13:26:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.484 13:26:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732109171 )) 00:07:01.484 13:26:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.484 13:26:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732109171 )) 00:07:01.484 13:26:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.747 [2024-11-20 13:26:13.460645] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
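(Annotation, not part of the captured trace.) The dd_flag_noatime test records the access times of both dump files with stat --printf=%X, sleeps one second, and copies with --iflag=noatime; it passes if the source file's atime is unchanged afterwards, and a second copy without the flag confirms that the atime does advance when noatime is not requested. A bare-bones sketch of that check; the real posix.sh keeps the values in atime_if/atime_of as shown in the trace:

  atime_if=$(stat --printf=%X dd.dump0)
  sleep 1
  spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
  # the read must not have updated the source file's access time
  (( $(stat --printf=%X dd.dump0) == atime_if ))
  # a plain copy afterwards is expected to move the atime past the recorded value
  spdk_dd --if=dd.dump0 --of=dd.dump1
  (( atime_if < $(stat --printf=%X dd.dump0) ))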
00:07:01.747 [2024-11-20 13:26:13.460942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60379 ] 00:07:01.747 [2024-11-20 13:26:13.613636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.747 [2024-11-20 13:26:13.681674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.006 [2024-11-20 13:26:13.743073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.006  [2024-11-20T13:26:14.239Z] Copying: 512/512 [B] (average 500 kBps) 00:07:02.282 00:07:02.282 13:26:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:02.282 13:26:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732109173 )) 00:07:02.282 00:07:02.282 real 0m2.210s 00:07:02.282 user 0m0.659s 00:07:02.282 sys 0m0.617s 00:07:02.282 ************************************ 00:07:02.282 END TEST dd_flag_noatime 00:07:02.282 ************************************ 00:07:02.282 13:26:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.282 13:26:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:02.282 ************************************ 00:07:02.282 START TEST dd_flags_misc 00:07:02.282 ************************************ 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:02.282 13:26:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:02.282 [2024-11-20 13:26:14.111509] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
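(Annotation, not part of the captured trace.) dd_flags_misc, which starts above, walks a small flag matrix: the read-side flags direct and nonblock are each combined with the write-side flags direct, nonblock, sync and dsync, and for every pair a 512-byte random buffer is copied from dd.dump0 to dd.dump1 and compared back. A schematic version of that loop; the real posix.sh drives it through the flags_ro/flags_rw arrays visible in the trace, and its comparison uses the generated string rather than cmp:

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
      gen_bytes 512 > dd.dump0                  # assumed helper usage for the sketch
      spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
      # every combination must produce a byte-identical copy
      cmp dd.dump0 dd.dump1
    done
  done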
00:07:02.282 [2024-11-20 13:26:14.111605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60408 ] 00:07:02.566 [2024-11-20 13:26:14.253542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.566 [2024-11-20 13:26:14.310447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.566 [2024-11-20 13:26:14.369277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.566  [2024-11-20T13:26:14.781Z] Copying: 512/512 [B] (average 500 kBps) 00:07:02.824 00:07:02.824 13:26:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9g64rfsamtzz9cgtnihw7fq293wzxg14hnupheobk9838v77b6f6yampce8rbvmzhgadwy9s5rqv3qw6jhi85ql49f9a1kt5v7ul1che4mdu6s4d8evb4c6e4cr2phlcja01pont510qm6m8eixpaw3jamdg61b6my4zpgv6y2mxsqk54detounfcrdaynxha8opxnc8jtiecs7hqs4xashmpuxxeyksot3oj6ez0klzwuobek55o70f773qkqxknmczrdxnwj0fvabs9xnvuoi6a273c994c7kolf4507hs14ak6bkfwqyeekjy3f9x522lk91rivt9a6a1d0d1kgmy0hzsqg5vpjx65l76k3unv6d2o7713zd2wm6pk64rwtf3uhbahi12fmr46rx3cr0qcnx5ajcz4a07mhd0xpgzq0lvd2j9n8lqvum9i3sf6ci93br2abkkdz2a6i6prkukufk9noqrdqrjc63f1r91l8ocy2zz6dd7vej6hoox == \9\g\6\4\r\f\s\a\m\t\z\z\9\c\g\t\n\i\h\w\7\f\q\2\9\3\w\z\x\g\1\4\h\n\u\p\h\e\o\b\k\9\8\3\8\v\7\7\b\6\f\6\y\a\m\p\c\e\8\r\b\v\m\z\h\g\a\d\w\y\9\s\5\r\q\v\3\q\w\6\j\h\i\8\5\q\l\4\9\f\9\a\1\k\t\5\v\7\u\l\1\c\h\e\4\m\d\u\6\s\4\d\8\e\v\b\4\c\6\e\4\c\r\2\p\h\l\c\j\a\0\1\p\o\n\t\5\1\0\q\m\6\m\8\e\i\x\p\a\w\3\j\a\m\d\g\6\1\b\6\m\y\4\z\p\g\v\6\y\2\m\x\s\q\k\5\4\d\e\t\o\u\n\f\c\r\d\a\y\n\x\h\a\8\o\p\x\n\c\8\j\t\i\e\c\s\7\h\q\s\4\x\a\s\h\m\p\u\x\x\e\y\k\s\o\t\3\o\j\6\e\z\0\k\l\z\w\u\o\b\e\k\5\5\o\7\0\f\7\7\3\q\k\q\x\k\n\m\c\z\r\d\x\n\w\j\0\f\v\a\b\s\9\x\n\v\u\o\i\6\a\2\7\3\c\9\9\4\c\7\k\o\l\f\4\5\0\7\h\s\1\4\a\k\6\b\k\f\w\q\y\e\e\k\j\y\3\f\9\x\5\2\2\l\k\9\1\r\i\v\t\9\a\6\a\1\d\0\d\1\k\g\m\y\0\h\z\s\q\g\5\v\p\j\x\6\5\l\7\6\k\3\u\n\v\6\d\2\o\7\7\1\3\z\d\2\w\m\6\p\k\6\4\r\w\t\f\3\u\h\b\a\h\i\1\2\f\m\r\4\6\r\x\3\c\r\0\q\c\n\x\5\a\j\c\z\4\a\0\7\m\h\d\0\x\p\g\z\q\0\l\v\d\2\j\9\n\8\l\q\v\u\m\9\i\3\s\f\6\c\i\9\3\b\r\2\a\b\k\k\d\z\2\a\6\i\6\p\r\k\u\k\u\f\k\9\n\o\q\r\d\q\r\j\c\6\3\f\1\r\9\1\l\8\o\c\y\2\z\z\6\d\d\7\v\e\j\6\h\o\o\x ]] 00:07:02.824 13:26:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:02.824 13:26:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:02.824 [2024-11-20 13:26:14.649239] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:02.824 [2024-11-20 13:26:14.649331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60417 ] 00:07:03.082 [2024-11-20 13:26:14.795075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.082 [2024-11-20 13:26:14.859842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.082 [2024-11-20 13:26:14.920063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.082  [2024-11-20T13:26:15.299Z] Copying: 512/512 [B] (average 500 kBps) 00:07:03.342 00:07:03.342 13:26:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9g64rfsamtzz9cgtnihw7fq293wzxg14hnupheobk9838v77b6f6yampce8rbvmzhgadwy9s5rqv3qw6jhi85ql49f9a1kt5v7ul1che4mdu6s4d8evb4c6e4cr2phlcja01pont510qm6m8eixpaw3jamdg61b6my4zpgv6y2mxsqk54detounfcrdaynxha8opxnc8jtiecs7hqs4xashmpuxxeyksot3oj6ez0klzwuobek55o70f773qkqxknmczrdxnwj0fvabs9xnvuoi6a273c994c7kolf4507hs14ak6bkfwqyeekjy3f9x522lk91rivt9a6a1d0d1kgmy0hzsqg5vpjx65l76k3unv6d2o7713zd2wm6pk64rwtf3uhbahi12fmr46rx3cr0qcnx5ajcz4a07mhd0xpgzq0lvd2j9n8lqvum9i3sf6ci93br2abkkdz2a6i6prkukufk9noqrdqrjc63f1r91l8ocy2zz6dd7vej6hoox == \9\g\6\4\r\f\s\a\m\t\z\z\9\c\g\t\n\i\h\w\7\f\q\2\9\3\w\z\x\g\1\4\h\n\u\p\h\e\o\b\k\9\8\3\8\v\7\7\b\6\f\6\y\a\m\p\c\e\8\r\b\v\m\z\h\g\a\d\w\y\9\s\5\r\q\v\3\q\w\6\j\h\i\8\5\q\l\4\9\f\9\a\1\k\t\5\v\7\u\l\1\c\h\e\4\m\d\u\6\s\4\d\8\e\v\b\4\c\6\e\4\c\r\2\p\h\l\c\j\a\0\1\p\o\n\t\5\1\0\q\m\6\m\8\e\i\x\p\a\w\3\j\a\m\d\g\6\1\b\6\m\y\4\z\p\g\v\6\y\2\m\x\s\q\k\5\4\d\e\t\o\u\n\f\c\r\d\a\y\n\x\h\a\8\o\p\x\n\c\8\j\t\i\e\c\s\7\h\q\s\4\x\a\s\h\m\p\u\x\x\e\y\k\s\o\t\3\o\j\6\e\z\0\k\l\z\w\u\o\b\e\k\5\5\o\7\0\f\7\7\3\q\k\q\x\k\n\m\c\z\r\d\x\n\w\j\0\f\v\a\b\s\9\x\n\v\u\o\i\6\a\2\7\3\c\9\9\4\c\7\k\o\l\f\4\5\0\7\h\s\1\4\a\k\6\b\k\f\w\q\y\e\e\k\j\y\3\f\9\x\5\2\2\l\k\9\1\r\i\v\t\9\a\6\a\1\d\0\d\1\k\g\m\y\0\h\z\s\q\g\5\v\p\j\x\6\5\l\7\6\k\3\u\n\v\6\d\2\o\7\7\1\3\z\d\2\w\m\6\p\k\6\4\r\w\t\f\3\u\h\b\a\h\i\1\2\f\m\r\4\6\r\x\3\c\r\0\q\c\n\x\5\a\j\c\z\4\a\0\7\m\h\d\0\x\p\g\z\q\0\l\v\d\2\j\9\n\8\l\q\v\u\m\9\i\3\s\f\6\c\i\9\3\b\r\2\a\b\k\k\d\z\2\a\6\i\6\p\r\k\u\k\u\f\k\9\n\o\q\r\d\q\r\j\c\6\3\f\1\r\9\1\l\8\o\c\y\2\z\z\6\d\d\7\v\e\j\6\h\o\o\x ]] 00:07:03.342 13:26:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:03.342 13:26:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:03.342 [2024-11-20 13:26:15.216839] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:03.342 [2024-11-20 13:26:15.216937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60427 ] 00:07:03.599 [2024-11-20 13:26:15.371799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.599 [2024-11-20 13:26:15.438373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.599 [2024-11-20 13:26:15.505225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.858  [2024-11-20T13:26:15.815Z] Copying: 512/512 [B] (average 125 kBps) 00:07:03.858 00:07:03.858 13:26:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9g64rfsamtzz9cgtnihw7fq293wzxg14hnupheobk9838v77b6f6yampce8rbvmzhgadwy9s5rqv3qw6jhi85ql49f9a1kt5v7ul1che4mdu6s4d8evb4c6e4cr2phlcja01pont510qm6m8eixpaw3jamdg61b6my4zpgv6y2mxsqk54detounfcrdaynxha8opxnc8jtiecs7hqs4xashmpuxxeyksot3oj6ez0klzwuobek55o70f773qkqxknmczrdxnwj0fvabs9xnvuoi6a273c994c7kolf4507hs14ak6bkfwqyeekjy3f9x522lk91rivt9a6a1d0d1kgmy0hzsqg5vpjx65l76k3unv6d2o7713zd2wm6pk64rwtf3uhbahi12fmr46rx3cr0qcnx5ajcz4a07mhd0xpgzq0lvd2j9n8lqvum9i3sf6ci93br2abkkdz2a6i6prkukufk9noqrdqrjc63f1r91l8ocy2zz6dd7vej6hoox == \9\g\6\4\r\f\s\a\m\t\z\z\9\c\g\t\n\i\h\w\7\f\q\2\9\3\w\z\x\g\1\4\h\n\u\p\h\e\o\b\k\9\8\3\8\v\7\7\b\6\f\6\y\a\m\p\c\e\8\r\b\v\m\z\h\g\a\d\w\y\9\s\5\r\q\v\3\q\w\6\j\h\i\8\5\q\l\4\9\f\9\a\1\k\t\5\v\7\u\l\1\c\h\e\4\m\d\u\6\s\4\d\8\e\v\b\4\c\6\e\4\c\r\2\p\h\l\c\j\a\0\1\p\o\n\t\5\1\0\q\m\6\m\8\e\i\x\p\a\w\3\j\a\m\d\g\6\1\b\6\m\y\4\z\p\g\v\6\y\2\m\x\s\q\k\5\4\d\e\t\o\u\n\f\c\r\d\a\y\n\x\h\a\8\o\p\x\n\c\8\j\t\i\e\c\s\7\h\q\s\4\x\a\s\h\m\p\u\x\x\e\y\k\s\o\t\3\o\j\6\e\z\0\k\l\z\w\u\o\b\e\k\5\5\o\7\0\f\7\7\3\q\k\q\x\k\n\m\c\z\r\d\x\n\w\j\0\f\v\a\b\s\9\x\n\v\u\o\i\6\a\2\7\3\c\9\9\4\c\7\k\o\l\f\4\5\0\7\h\s\1\4\a\k\6\b\k\f\w\q\y\e\e\k\j\y\3\f\9\x\5\2\2\l\k\9\1\r\i\v\t\9\a\6\a\1\d\0\d\1\k\g\m\y\0\h\z\s\q\g\5\v\p\j\x\6\5\l\7\6\k\3\u\n\v\6\d\2\o\7\7\1\3\z\d\2\w\m\6\p\k\6\4\r\w\t\f\3\u\h\b\a\h\i\1\2\f\m\r\4\6\r\x\3\c\r\0\q\c\n\x\5\a\j\c\z\4\a\0\7\m\h\d\0\x\p\g\z\q\0\l\v\d\2\j\9\n\8\l\q\v\u\m\9\i\3\s\f\6\c\i\9\3\b\r\2\a\b\k\k\d\z\2\a\6\i\6\p\r\k\u\k\u\f\k\9\n\o\q\r\d\q\r\j\c\6\3\f\1\r\9\1\l\8\o\c\y\2\z\z\6\d\d\7\v\e\j\6\h\o\o\x ]] 00:07:03.858 13:26:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:03.858 13:26:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:04.117 [2024-11-20 13:26:15.816882] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:04.117 [2024-11-20 13:26:15.816983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60438 ] 00:07:04.117 [2024-11-20 13:26:15.966576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.117 [2024-11-20 13:26:16.029469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.375 [2024-11-20 13:26:16.089500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.375  [2024-11-20T13:26:16.332Z] Copying: 512/512 [B] (average 500 kBps) 00:07:04.375 00:07:04.633 13:26:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9g64rfsamtzz9cgtnihw7fq293wzxg14hnupheobk9838v77b6f6yampce8rbvmzhgadwy9s5rqv3qw6jhi85ql49f9a1kt5v7ul1che4mdu6s4d8evb4c6e4cr2phlcja01pont510qm6m8eixpaw3jamdg61b6my4zpgv6y2mxsqk54detounfcrdaynxha8opxnc8jtiecs7hqs4xashmpuxxeyksot3oj6ez0klzwuobek55o70f773qkqxknmczrdxnwj0fvabs9xnvuoi6a273c994c7kolf4507hs14ak6bkfwqyeekjy3f9x522lk91rivt9a6a1d0d1kgmy0hzsqg5vpjx65l76k3unv6d2o7713zd2wm6pk64rwtf3uhbahi12fmr46rx3cr0qcnx5ajcz4a07mhd0xpgzq0lvd2j9n8lqvum9i3sf6ci93br2abkkdz2a6i6prkukufk9noqrdqrjc63f1r91l8ocy2zz6dd7vej6hoox == \9\g\6\4\r\f\s\a\m\t\z\z\9\c\g\t\n\i\h\w\7\f\q\2\9\3\w\z\x\g\1\4\h\n\u\p\h\e\o\b\k\9\8\3\8\v\7\7\b\6\f\6\y\a\m\p\c\e\8\r\b\v\m\z\h\g\a\d\w\y\9\s\5\r\q\v\3\q\w\6\j\h\i\8\5\q\l\4\9\f\9\a\1\k\t\5\v\7\u\l\1\c\h\e\4\m\d\u\6\s\4\d\8\e\v\b\4\c\6\e\4\c\r\2\p\h\l\c\j\a\0\1\p\o\n\t\5\1\0\q\m\6\m\8\e\i\x\p\a\w\3\j\a\m\d\g\6\1\b\6\m\y\4\z\p\g\v\6\y\2\m\x\s\q\k\5\4\d\e\t\o\u\n\f\c\r\d\a\y\n\x\h\a\8\o\p\x\n\c\8\j\t\i\e\c\s\7\h\q\s\4\x\a\s\h\m\p\u\x\x\e\y\k\s\o\t\3\o\j\6\e\z\0\k\l\z\w\u\o\b\e\k\5\5\o\7\0\f\7\7\3\q\k\q\x\k\n\m\c\z\r\d\x\n\w\j\0\f\v\a\b\s\9\x\n\v\u\o\i\6\a\2\7\3\c\9\9\4\c\7\k\o\l\f\4\5\0\7\h\s\1\4\a\k\6\b\k\f\w\q\y\e\e\k\j\y\3\f\9\x\5\2\2\l\k\9\1\r\i\v\t\9\a\6\a\1\d\0\d\1\k\g\m\y\0\h\z\s\q\g\5\v\p\j\x\6\5\l\7\6\k\3\u\n\v\6\d\2\o\7\7\1\3\z\d\2\w\m\6\p\k\6\4\r\w\t\f\3\u\h\b\a\h\i\1\2\f\m\r\4\6\r\x\3\c\r\0\q\c\n\x\5\a\j\c\z\4\a\0\7\m\h\d\0\x\p\g\z\q\0\l\v\d\2\j\9\n\8\l\q\v\u\m\9\i\3\s\f\6\c\i\9\3\b\r\2\a\b\k\k\d\z\2\a\6\i\6\p\r\k\u\k\u\f\k\9\n\o\q\r\d\q\r\j\c\6\3\f\1\r\9\1\l\8\o\c\y\2\z\z\6\d\d\7\v\e\j\6\h\o\o\x ]] 00:07:04.633 13:26:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:04.633 13:26:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:04.633 13:26:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:04.633 13:26:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:04.633 13:26:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:04.633 13:26:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:04.633 [2024-11-20 13:26:16.404110] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:04.633 [2024-11-20 13:26:16.404218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60453 ] 00:07:04.633 [2024-11-20 13:26:16.551393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.891 [2024-11-20 13:26:16.615892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.891 [2024-11-20 13:26:16.675486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.891  [2024-11-20T13:26:17.107Z] Copying: 512/512 [B] (average 500 kBps) 00:07:05.150 00:07:05.150 13:26:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 23icd5jjq0fjplgdo0rkbd7q8b39mp4pz4f06yd9dsamq2mohjxizi6dz7dgtl51grr0voegb2x2b9nh2oxaha9qkhapyadqqok9lz7uy9h01ge2tux5iwyglldcwh5lzwx66iwq8fceb9f3c6i5eo8wmrbymos5qz0gronx8zkkdwzrwxlzf6x9mwotnrnc79i3rkk176i8mhyddvxn2gps19zpaqi8hquahuse1wa0zjhfg7zmcqe5sp74uwiebl609xzdqc5v2bd3nae64907hy7zoevtaeyxdpw7ecio4qb56k4rgqwnnw4isuma1vs3db0byqqxhryomfz55otjjsfcmb43ygzemt5055g7pwwe1w4kdtlswxs58i5mor9kkr0a3yarxjo9p57lttrzgt442ijse6i35eoin53je9pj0v6gdqydvi87u9wh5g54ls3dkjy80tquqmh6gtu2dbey0xumkr2gcr5jhm06tvp79d42k2snorz9dclb == \2\3\i\c\d\5\j\j\q\0\f\j\p\l\g\d\o\0\r\k\b\d\7\q\8\b\3\9\m\p\4\p\z\4\f\0\6\y\d\9\d\s\a\m\q\2\m\o\h\j\x\i\z\i\6\d\z\7\d\g\t\l\5\1\g\r\r\0\v\o\e\g\b\2\x\2\b\9\n\h\2\o\x\a\h\a\9\q\k\h\a\p\y\a\d\q\q\o\k\9\l\z\7\u\y\9\h\0\1\g\e\2\t\u\x\5\i\w\y\g\l\l\d\c\w\h\5\l\z\w\x\6\6\i\w\q\8\f\c\e\b\9\f\3\c\6\i\5\e\o\8\w\m\r\b\y\m\o\s\5\q\z\0\g\r\o\n\x\8\z\k\k\d\w\z\r\w\x\l\z\f\6\x\9\m\w\o\t\n\r\n\c\7\9\i\3\r\k\k\1\7\6\i\8\m\h\y\d\d\v\x\n\2\g\p\s\1\9\z\p\a\q\i\8\h\q\u\a\h\u\s\e\1\w\a\0\z\j\h\f\g\7\z\m\c\q\e\5\s\p\7\4\u\w\i\e\b\l\6\0\9\x\z\d\q\c\5\v\2\b\d\3\n\a\e\6\4\9\0\7\h\y\7\z\o\e\v\t\a\e\y\x\d\p\w\7\e\c\i\o\4\q\b\5\6\k\4\r\g\q\w\n\n\w\4\i\s\u\m\a\1\v\s\3\d\b\0\b\y\q\q\x\h\r\y\o\m\f\z\5\5\o\t\j\j\s\f\c\m\b\4\3\y\g\z\e\m\t\5\0\5\5\g\7\p\w\w\e\1\w\4\k\d\t\l\s\w\x\s\5\8\i\5\m\o\r\9\k\k\r\0\a\3\y\a\r\x\j\o\9\p\5\7\l\t\t\r\z\g\t\4\4\2\i\j\s\e\6\i\3\5\e\o\i\n\5\3\j\e\9\p\j\0\v\6\g\d\q\y\d\v\i\8\7\u\9\w\h\5\g\5\4\l\s\3\d\k\j\y\8\0\t\q\u\q\m\h\6\g\t\u\2\d\b\e\y\0\x\u\m\k\r\2\g\c\r\5\j\h\m\0\6\t\v\p\7\9\d\4\2\k\2\s\n\o\r\z\9\d\c\l\b ]] 00:07:05.150 13:26:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:05.150 13:26:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:05.150 [2024-11-20 13:26:16.977353] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:05.150 [2024-11-20 13:26:16.977451] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60457 ] 00:07:05.409 [2024-11-20 13:26:17.129349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.409 [2024-11-20 13:26:17.189558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.409 [2024-11-20 13:26:17.249052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.409  [2024-11-20T13:26:17.625Z] Copying: 512/512 [B] (average 500 kBps) 00:07:05.668 00:07:05.668 13:26:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 23icd5jjq0fjplgdo0rkbd7q8b39mp4pz4f06yd9dsamq2mohjxizi6dz7dgtl51grr0voegb2x2b9nh2oxaha9qkhapyadqqok9lz7uy9h01ge2tux5iwyglldcwh5lzwx66iwq8fceb9f3c6i5eo8wmrbymos5qz0gronx8zkkdwzrwxlzf6x9mwotnrnc79i3rkk176i8mhyddvxn2gps19zpaqi8hquahuse1wa0zjhfg7zmcqe5sp74uwiebl609xzdqc5v2bd3nae64907hy7zoevtaeyxdpw7ecio4qb56k4rgqwnnw4isuma1vs3db0byqqxhryomfz55otjjsfcmb43ygzemt5055g7pwwe1w4kdtlswxs58i5mor9kkr0a3yarxjo9p57lttrzgt442ijse6i35eoin53je9pj0v6gdqydvi87u9wh5g54ls3dkjy80tquqmh6gtu2dbey0xumkr2gcr5jhm06tvp79d42k2snorz9dclb == \2\3\i\c\d\5\j\j\q\0\f\j\p\l\g\d\o\0\r\k\b\d\7\q\8\b\3\9\m\p\4\p\z\4\f\0\6\y\d\9\d\s\a\m\q\2\m\o\h\j\x\i\z\i\6\d\z\7\d\g\t\l\5\1\g\r\r\0\v\o\e\g\b\2\x\2\b\9\n\h\2\o\x\a\h\a\9\q\k\h\a\p\y\a\d\q\q\o\k\9\l\z\7\u\y\9\h\0\1\g\e\2\t\u\x\5\i\w\y\g\l\l\d\c\w\h\5\l\z\w\x\6\6\i\w\q\8\f\c\e\b\9\f\3\c\6\i\5\e\o\8\w\m\r\b\y\m\o\s\5\q\z\0\g\r\o\n\x\8\z\k\k\d\w\z\r\w\x\l\z\f\6\x\9\m\w\o\t\n\r\n\c\7\9\i\3\r\k\k\1\7\6\i\8\m\h\y\d\d\v\x\n\2\g\p\s\1\9\z\p\a\q\i\8\h\q\u\a\h\u\s\e\1\w\a\0\z\j\h\f\g\7\z\m\c\q\e\5\s\p\7\4\u\w\i\e\b\l\6\0\9\x\z\d\q\c\5\v\2\b\d\3\n\a\e\6\4\9\0\7\h\y\7\z\o\e\v\t\a\e\y\x\d\p\w\7\e\c\i\o\4\q\b\5\6\k\4\r\g\q\w\n\n\w\4\i\s\u\m\a\1\v\s\3\d\b\0\b\y\q\q\x\h\r\y\o\m\f\z\5\5\o\t\j\j\s\f\c\m\b\4\3\y\g\z\e\m\t\5\0\5\5\g\7\p\w\w\e\1\w\4\k\d\t\l\s\w\x\s\5\8\i\5\m\o\r\9\k\k\r\0\a\3\y\a\r\x\j\o\9\p\5\7\l\t\t\r\z\g\t\4\4\2\i\j\s\e\6\i\3\5\e\o\i\n\5\3\j\e\9\p\j\0\v\6\g\d\q\y\d\v\i\8\7\u\9\w\h\5\g\5\4\l\s\3\d\k\j\y\8\0\t\q\u\q\m\h\6\g\t\u\2\d\b\e\y\0\x\u\m\k\r\2\g\c\r\5\j\h\m\0\6\t\v\p\7\9\d\4\2\k\2\s\n\o\r\z\9\d\c\l\b ]] 00:07:05.668 13:26:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:05.668 13:26:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:05.668 [2024-11-20 13:26:17.542649] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:05.668 [2024-11-20 13:26:17.542766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60474 ] 00:07:05.927 [2024-11-20 13:26:17.692110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.927 [2024-11-20 13:26:17.740984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.927 [2024-11-20 13:26:17.797812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.927  [2024-11-20T13:26:18.144Z] Copying: 512/512 [B] (average 250 kBps) 00:07:06.187 00:07:06.187 13:26:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 23icd5jjq0fjplgdo0rkbd7q8b39mp4pz4f06yd9dsamq2mohjxizi6dz7dgtl51grr0voegb2x2b9nh2oxaha9qkhapyadqqok9lz7uy9h01ge2tux5iwyglldcwh5lzwx66iwq8fceb9f3c6i5eo8wmrbymos5qz0gronx8zkkdwzrwxlzf6x9mwotnrnc79i3rkk176i8mhyddvxn2gps19zpaqi8hquahuse1wa0zjhfg7zmcqe5sp74uwiebl609xzdqc5v2bd3nae64907hy7zoevtaeyxdpw7ecio4qb56k4rgqwnnw4isuma1vs3db0byqqxhryomfz55otjjsfcmb43ygzemt5055g7pwwe1w4kdtlswxs58i5mor9kkr0a3yarxjo9p57lttrzgt442ijse6i35eoin53je9pj0v6gdqydvi87u9wh5g54ls3dkjy80tquqmh6gtu2dbey0xumkr2gcr5jhm06tvp79d42k2snorz9dclb == \2\3\i\c\d\5\j\j\q\0\f\j\p\l\g\d\o\0\r\k\b\d\7\q\8\b\3\9\m\p\4\p\z\4\f\0\6\y\d\9\d\s\a\m\q\2\m\o\h\j\x\i\z\i\6\d\z\7\d\g\t\l\5\1\g\r\r\0\v\o\e\g\b\2\x\2\b\9\n\h\2\o\x\a\h\a\9\q\k\h\a\p\y\a\d\q\q\o\k\9\l\z\7\u\y\9\h\0\1\g\e\2\t\u\x\5\i\w\y\g\l\l\d\c\w\h\5\l\z\w\x\6\6\i\w\q\8\f\c\e\b\9\f\3\c\6\i\5\e\o\8\w\m\r\b\y\m\o\s\5\q\z\0\g\r\o\n\x\8\z\k\k\d\w\z\r\w\x\l\z\f\6\x\9\m\w\o\t\n\r\n\c\7\9\i\3\r\k\k\1\7\6\i\8\m\h\y\d\d\v\x\n\2\g\p\s\1\9\z\p\a\q\i\8\h\q\u\a\h\u\s\e\1\w\a\0\z\j\h\f\g\7\z\m\c\q\e\5\s\p\7\4\u\w\i\e\b\l\6\0\9\x\z\d\q\c\5\v\2\b\d\3\n\a\e\6\4\9\0\7\h\y\7\z\o\e\v\t\a\e\y\x\d\p\w\7\e\c\i\o\4\q\b\5\6\k\4\r\g\q\w\n\n\w\4\i\s\u\m\a\1\v\s\3\d\b\0\b\y\q\q\x\h\r\y\o\m\f\z\5\5\o\t\j\j\s\f\c\m\b\4\3\y\g\z\e\m\t\5\0\5\5\g\7\p\w\w\e\1\w\4\k\d\t\l\s\w\x\s\5\8\i\5\m\o\r\9\k\k\r\0\a\3\y\a\r\x\j\o\9\p\5\7\l\t\t\r\z\g\t\4\4\2\i\j\s\e\6\i\3\5\e\o\i\n\5\3\j\e\9\p\j\0\v\6\g\d\q\y\d\v\i\8\7\u\9\w\h\5\g\5\4\l\s\3\d\k\j\y\8\0\t\q\u\q\m\h\6\g\t\u\2\d\b\e\y\0\x\u\m\k\r\2\g\c\r\5\j\h\m\0\6\t\v\p\7\9\d\4\2\k\2\s\n\o\r\z\9\d\c\l\b ]] 00:07:06.187 13:26:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.187 13:26:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:06.187 [2024-11-20 13:26:18.096940] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:06.187 [2024-11-20 13:26:18.097038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60484 ] 00:07:06.446 [2024-11-20 13:26:18.252871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.446 [2024-11-20 13:26:18.319988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.446 [2024-11-20 13:26:18.383935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.704  [2024-11-20T13:26:18.661Z] Copying: 512/512 [B] (average 250 kBps) 00:07:06.704 00:07:06.704 13:26:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 23icd5jjq0fjplgdo0rkbd7q8b39mp4pz4f06yd9dsamq2mohjxizi6dz7dgtl51grr0voegb2x2b9nh2oxaha9qkhapyadqqok9lz7uy9h01ge2tux5iwyglldcwh5lzwx66iwq8fceb9f3c6i5eo8wmrbymos5qz0gronx8zkkdwzrwxlzf6x9mwotnrnc79i3rkk176i8mhyddvxn2gps19zpaqi8hquahuse1wa0zjhfg7zmcqe5sp74uwiebl609xzdqc5v2bd3nae64907hy7zoevtaeyxdpw7ecio4qb56k4rgqwnnw4isuma1vs3db0byqqxhryomfz55otjjsfcmb43ygzemt5055g7pwwe1w4kdtlswxs58i5mor9kkr0a3yarxjo9p57lttrzgt442ijse6i35eoin53je9pj0v6gdqydvi87u9wh5g54ls3dkjy80tquqmh6gtu2dbey0xumkr2gcr5jhm06tvp79d42k2snorz9dclb == \2\3\i\c\d\5\j\j\q\0\f\j\p\l\g\d\o\0\r\k\b\d\7\q\8\b\3\9\m\p\4\p\z\4\f\0\6\y\d\9\d\s\a\m\q\2\m\o\h\j\x\i\z\i\6\d\z\7\d\g\t\l\5\1\g\r\r\0\v\o\e\g\b\2\x\2\b\9\n\h\2\o\x\a\h\a\9\q\k\h\a\p\y\a\d\q\q\o\k\9\l\z\7\u\y\9\h\0\1\g\e\2\t\u\x\5\i\w\y\g\l\l\d\c\w\h\5\l\z\w\x\6\6\i\w\q\8\f\c\e\b\9\f\3\c\6\i\5\e\o\8\w\m\r\b\y\m\o\s\5\q\z\0\g\r\o\n\x\8\z\k\k\d\w\z\r\w\x\l\z\f\6\x\9\m\w\o\t\n\r\n\c\7\9\i\3\r\k\k\1\7\6\i\8\m\h\y\d\d\v\x\n\2\g\p\s\1\9\z\p\a\q\i\8\h\q\u\a\h\u\s\e\1\w\a\0\z\j\h\f\g\7\z\m\c\q\e\5\s\p\7\4\u\w\i\e\b\l\6\0\9\x\z\d\q\c\5\v\2\b\d\3\n\a\e\6\4\9\0\7\h\y\7\z\o\e\v\t\a\e\y\x\d\p\w\7\e\c\i\o\4\q\b\5\6\k\4\r\g\q\w\n\n\w\4\i\s\u\m\a\1\v\s\3\d\b\0\b\y\q\q\x\h\r\y\o\m\f\z\5\5\o\t\j\j\s\f\c\m\b\4\3\y\g\z\e\m\t\5\0\5\5\g\7\p\w\w\e\1\w\4\k\d\t\l\s\w\x\s\5\8\i\5\m\o\r\9\k\k\r\0\a\3\y\a\r\x\j\o\9\p\5\7\l\t\t\r\z\g\t\4\4\2\i\j\s\e\6\i\3\5\e\o\i\n\5\3\j\e\9\p\j\0\v\6\g\d\q\y\d\v\i\8\7\u\9\w\h\5\g\5\4\l\s\3\d\k\j\y\8\0\t\q\u\q\m\h\6\g\t\u\2\d\b\e\y\0\x\u\m\k\r\2\g\c\r\5\j\h\m\0\6\t\v\p\7\9\d\4\2\k\2\s\n\o\r\z\9\d\c\l\b ]] 00:07:06.704 00:07:06.704 real 0m4.583s 00:07:06.704 user 0m2.483s 00:07:06.704 sys 0m2.363s 00:07:06.704 ************************************ 00:07:06.704 END TEST dd_flags_misc 00:07:06.704 ************************************ 00:07:06.704 13:26:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.704 13:26:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:06.963 * Second test run, disabling liburing, forcing AIO 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.963 ************************************ 00:07:06.963 START TEST dd_flag_append_forced_aio 00:07:06.963 ************************************ 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=yuf8kdcjvtwz5roxirkb7f8tjrrm5s6c 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=3c1jptb5juk3bupbshu6srfrd2kc5pkn 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s yuf8kdcjvtwz5roxirkb7f8tjrrm5s6c 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 3c1jptb5juk3bupbshu6srfrd2kc5pkn 00:07:06.963 13:26:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:06.963 [2024-11-20 13:26:18.741967] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:06.963 [2024-11-20 13:26:18.742087] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60512 ] 00:07:06.963 [2024-11-20 13:26:18.886607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.222 [2024-11-20 13:26:18.951258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.222 [2024-11-20 13:26:19.011677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.222  [2024-11-20T13:26:19.438Z] Copying: 32/32 [B] (average 31 kBps) 00:07:07.481 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 3c1jptb5juk3bupbshu6srfrd2kc5pknyuf8kdcjvtwz5roxirkb7f8tjrrm5s6c == \3\c\1\j\p\t\b\5\j\u\k\3\b\u\p\b\s\h\u\6\s\r\f\r\d\2\k\c\5\p\k\n\y\u\f\8\k\d\c\j\v\t\w\z\5\r\o\x\i\r\k\b\7\f\8\t\j\r\r\m\5\s\6\c ]] 00:07:07.481 00:07:07.481 real 0m0.600s 00:07:07.481 user 0m0.324s 00:07:07.481 sys 0m0.151s 00:07:07.481 ************************************ 00:07:07.481 END TEST dd_flag_append_forced_aio 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:07.481 ************************************ 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:07.481 ************************************ 00:07:07.481 START TEST dd_flag_directory_forced_aio 00:07:07.481 ************************************ 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.481 13:26:19 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.481 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.481 [2024-11-20 13:26:19.400207] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:07.481 [2024-11-20 13:26:19.400311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60539 ] 00:07:07.740 [2024-11-20 13:26:19.551475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.740 [2024-11-20 13:26:19.606179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.740 [2024-11-20 13:26:19.664575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.999 [2024-11-20 13:26:19.705648] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:08.000 [2024-11-20 13:26:19.705737] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:08.000 [2024-11-20 13:26:19.705771] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.000 [2024-11-20 13:26:19.835487] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.000 13:26:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:08.259 [2024-11-20 13:26:19.964777] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:08.259 [2024-11-20 13:26:19.964886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60548 ] 00:07:08.259 [2024-11-20 13:26:20.115470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.259 [2024-11-20 13:26:20.175500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.517 [2024-11-20 13:26:20.234541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.517 [2024-11-20 13:26:20.274672] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:08.517 [2024-11-20 13:26:20.274738] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:08.517 [2024-11-20 13:26:20.274765] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.517 [2024-11-20 13:26:20.398748] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:08.517 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:08.517 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.517 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:08.517 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:08.517 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:08.517 13:26:20 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.517 00:07:08.517 real 0m1.120s 00:07:08.517 user 0m0.611s 00:07:08.517 sys 0m0.300s 00:07:08.517 ************************************ 00:07:08.517 END TEST dd_flag_directory_forced_aio 00:07:08.517 ************************************ 00:07:08.517 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.517 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:08.775 ************************************ 00:07:08.775 START TEST dd_flag_nofollow_forced_aio 00:07:08.775 ************************************ 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.775 13:26:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.775 [2024-11-20 13:26:20.580850] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:08.775 [2024-11-20 13:26:20.580941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60582 ] 00:07:09.033 [2024-11-20 13:26:20.730843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.033 [2024-11-20 13:26:20.796264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.033 [2024-11-20 13:26:20.855533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.033 [2024-11-20 13:26:20.896558] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:09.033 [2024-11-20 13:26:20.896622] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:09.033 [2024-11-20 13:26:20.896671] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.291 [2024-11-20 13:26:21.026078] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.291 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:09.291 [2024-11-20 13:26:21.147662] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:09.291 [2024-11-20 13:26:21.147782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60592 ] 00:07:09.550 [2024-11-20 13:26:21.290386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.550 [2024-11-20 13:26:21.350869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.550 [2024-11-20 13:26:21.410781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.550 [2024-11-20 13:26:21.449352] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:09.550 [2024-11-20 13:26:21.449425] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:09.550 [2024-11-20 13:26:21.449461] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.808 [2024-11-20 13:26:21.572737] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:09.808 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:09.808 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.808 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:09.808 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:09.808 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:09.808 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.808 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:09.808 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:09.808 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:09.808 13:26:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.808 [2024-11-20 13:26:21.696448] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:09.809 [2024-11-20 13:26:21.696574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60599 ] 00:07:10.068 [2024-11-20 13:26:21.846092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.068 [2024-11-20 13:26:21.910339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.068 [2024-11-20 13:26:21.971613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.068  [2024-11-20T13:26:22.284Z] Copying: 512/512 [B] (average 500 kBps) 00:07:10.327 00:07:10.327 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ jce92rszcm6n96alqd5573q1z0og9ua7s5mcmrpcv7xnl0hqvyj63mw67dai014csfnpfd3y7n9qgckvzn9n3h95tieyu8p5fm3pflaws1ujpdwkwu7ak4geiw1ifo3hrnamvyjr4fvi5pu30kxpyzdq5blzjcifsyy4vqze7dq8sbkx4gdiryrlt9z1oe8to4ppaexqwiv2yewjvhd8vk7s2l9k7oxoxrxfjzcnq37v836ja6ejq2tk1n88kd5jnz70gn5axb7cik4k773vb7105hcszc89mprtn40rjg0yvn8fnfkn6q4459gfl6d8ouga5rwsdv2c0umkfvbwa15pgjdhkrneg8fsh3q1dn08axxfbprb0kach492kfqyimqbbj8vsvlm7ihmkwo52xrlp264nrwjo75fx1s2jlam23g1gv16enalhlz9wjko66f0xxlizr5z7i1h3q00zr794u4zuk7i3hsyx7fqb9g07rqpb2d3hruowi965xfb == \j\c\e\9\2\r\s\z\c\m\6\n\9\6\a\l\q\d\5\5\7\3\q\1\z\0\o\g\9\u\a\7\s\5\m\c\m\r\p\c\v\7\x\n\l\0\h\q\v\y\j\6\3\m\w\6\7\d\a\i\0\1\4\c\s\f\n\p\f\d\3\y\7\n\9\q\g\c\k\v\z\n\9\n\3\h\9\5\t\i\e\y\u\8\p\5\f\m\3\p\f\l\a\w\s\1\u\j\p\d\w\k\w\u\7\a\k\4\g\e\i\w\1\i\f\o\3\h\r\n\a\m\v\y\j\r\4\f\v\i\5\p\u\3\0\k\x\p\y\z\d\q\5\b\l\z\j\c\i\f\s\y\y\4\v\q\z\e\7\d\q\8\s\b\k\x\4\g\d\i\r\y\r\l\t\9\z\1\o\e\8\t\o\4\p\p\a\e\x\q\w\i\v\2\y\e\w\j\v\h\d\8\v\k\7\s\2\l\9\k\7\o\x\o\x\r\x\f\j\z\c\n\q\3\7\v\8\3\6\j\a\6\e\j\q\2\t\k\1\n\8\8\k\d\5\j\n\z\7\0\g\n\5\a\x\b\7\c\i\k\4\k\7\7\3\v\b\7\1\0\5\h\c\s\z\c\8\9\m\p\r\t\n\4\0\r\j\g\0\y\v\n\8\f\n\f\k\n\6\q\4\4\5\9\g\f\l\6\d\8\o\u\g\a\5\r\w\s\d\v\2\c\0\u\m\k\f\v\b\w\a\1\5\p\g\j\d\h\k\r\n\e\g\8\f\s\h\3\q\1\d\n\0\8\a\x\x\f\b\p\r\b\0\k\a\c\h\4\9\2\k\f\q\y\i\m\q\b\b\j\8\v\s\v\l\m\7\i\h\m\k\w\o\5\2\x\r\l\p\2\6\4\n\r\w\j\o\7\5\f\x\1\s\2\j\l\a\m\2\3\g\1\g\v\1\6\e\n\a\l\h\l\z\9\w\j\k\o\6\6\f\0\x\x\l\i\z\r\5\z\7\i\1\h\3\q\0\0\z\r\7\9\4\u\4\z\u\k\7\i\3\h\s\y\x\7\f\q\b\9\g\0\7\r\q\p\b\2\d\3\h\r\u\o\w\i\9\6\5\x\f\b ]] 00:07:10.327 00:07:10.327 real 0m1.714s 00:07:10.327 user 0m0.936s 00:07:10.327 sys 0m0.447s 00:07:10.327 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.327 ************************************ 00:07:10.327 END TEST dd_flag_nofollow_forced_aio 00:07:10.327 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:10.327 ************************************ 00:07:10.327 13:26:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:07:10.327 13:26:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.327 13:26:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.327 13:26:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:10.327 ************************************ 00:07:10.327 START TEST dd_flag_noatime_forced_aio 00:07:10.327 ************************************ 00:07:10.327 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:07:10.327 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:10.327 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:10.327 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:10.327 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:10.327 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:10.586 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:10.586 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732109182 00:07:10.586 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.586 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732109182 00:07:10.586 13:26:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:11.521 13:26:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:11.521 [2024-11-20 13:26:23.360586] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:11.521 [2024-11-20 13:26:23.360715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60640 ] 00:07:11.780 [2024-11-20 13:26:23.512788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.780 [2024-11-20 13:26:23.596895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.780 [2024-11-20 13:26:23.657999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.780  [2024-11-20T13:26:23.995Z] Copying: 512/512 [B] (average 500 kBps) 00:07:12.038 00:07:12.038 13:26:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:12.039 13:26:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732109182 )) 00:07:12.039 13:26:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:12.039 13:26:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732109182 )) 00:07:12.039 13:26:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:12.039 [2024-11-20 13:26:23.985541] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:12.039 [2024-11-20 13:26:23.985675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60651 ] 00:07:12.298 [2024-11-20 13:26:24.134854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.298 [2024-11-20 13:26:24.198305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.556 [2024-11-20 13:26:24.254481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.556  [2024-11-20T13:26:24.513Z] Copying: 512/512 [B] (average 500 kBps) 00:07:12.556 00:07:12.556 13:26:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732109184 )) 00:07:12.815 00:07:12.815 real 0m2.234s 00:07:12.815 user 0m0.666s 00:07:12.815 sys 0m0.329s 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.815 ************************************ 00:07:12.815 END TEST dd_flag_noatime_forced_aio 00:07:12.815 ************************************ 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:12.815 ************************************ 00:07:12.815 START TEST dd_flags_misc_forced_aio 00:07:12.815 ************************************ 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.815 13:26:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:12.815 [2024-11-20 13:26:24.621492] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:12.815 [2024-11-20 13:26:24.621577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60678 ] 00:07:12.815 [2024-11-20 13:26:24.769432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.074 [2024-11-20 13:26:24.847109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.074 [2024-11-20 13:26:24.915131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.074  [2024-11-20T13:26:25.322Z] Copying: 512/512 [B] (average 500 kBps) 00:07:13.365 00:07:13.365 13:26:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ egm1cypwpegjoj5a9xfg9u2t359g8yggk6nrqzejb5ocgljqj5h3lnmhdrp3psbtg8v6rq8ubf6frlkqse0mmamiafa17dqw98gv9yx8dvyfa99fwlpaz1muppmsfy4tyga45z6jttglcwgawe7uxl7hsf77auw3jty9qhf37xfmyuoe6cy1nbduwjfulkar8nlnvyh5hfjnjgvr3dr1g473uxwkvdbf12fjth7usic4a09k7ygi0k5cjgc6rg5p884jp9xu16c71006bmi3gdj26b7xl13c8pbgmbj82ujgobcq6xa8b3lvsn8f6i2dsbh6hg4sc2lobkd4bg63i2upo89ay42y3sy3vmoh3gy7y3ulbnihoa5fq7h3phgieuraeyxgjr1v4jctlrx2pyjrtb884lz2niyorkx76jjqm183yhqxleh3ryotd837eox5e3fhsfoiekbu11pv33bpcj5f553pv8hsswaqeve2xaywukatdg4cgvp05bpt == 
\e\g\m\1\c\y\p\w\p\e\g\j\o\j\5\a\9\x\f\g\9\u\2\t\3\5\9\g\8\y\g\g\k\6\n\r\q\z\e\j\b\5\o\c\g\l\j\q\j\5\h\3\l\n\m\h\d\r\p\3\p\s\b\t\g\8\v\6\r\q\8\u\b\f\6\f\r\l\k\q\s\e\0\m\m\a\m\i\a\f\a\1\7\d\q\w\9\8\g\v\9\y\x\8\d\v\y\f\a\9\9\f\w\l\p\a\z\1\m\u\p\p\m\s\f\y\4\t\y\g\a\4\5\z\6\j\t\t\g\l\c\w\g\a\w\e\7\u\x\l\7\h\s\f\7\7\a\u\w\3\j\t\y\9\q\h\f\3\7\x\f\m\y\u\o\e\6\c\y\1\n\b\d\u\w\j\f\u\l\k\a\r\8\n\l\n\v\y\h\5\h\f\j\n\j\g\v\r\3\d\r\1\g\4\7\3\u\x\w\k\v\d\b\f\1\2\f\j\t\h\7\u\s\i\c\4\a\0\9\k\7\y\g\i\0\k\5\c\j\g\c\6\r\g\5\p\8\8\4\j\p\9\x\u\1\6\c\7\1\0\0\6\b\m\i\3\g\d\j\2\6\b\7\x\l\1\3\c\8\p\b\g\m\b\j\8\2\u\j\g\o\b\c\q\6\x\a\8\b\3\l\v\s\n\8\f\6\i\2\d\s\b\h\6\h\g\4\s\c\2\l\o\b\k\d\4\b\g\6\3\i\2\u\p\o\8\9\a\y\4\2\y\3\s\y\3\v\m\o\h\3\g\y\7\y\3\u\l\b\n\i\h\o\a\5\f\q\7\h\3\p\h\g\i\e\u\r\a\e\y\x\g\j\r\1\v\4\j\c\t\l\r\x\2\p\y\j\r\t\b\8\8\4\l\z\2\n\i\y\o\r\k\x\7\6\j\j\q\m\1\8\3\y\h\q\x\l\e\h\3\r\y\o\t\d\8\3\7\e\o\x\5\e\3\f\h\s\f\o\i\e\k\b\u\1\1\p\v\3\3\b\p\c\j\5\f\5\5\3\p\v\8\h\s\s\w\a\q\e\v\e\2\x\a\y\w\u\k\a\t\d\g\4\c\g\v\p\0\5\b\p\t ]] 00:07:13.365 13:26:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.365 13:26:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:13.365 [2024-11-20 13:26:25.225963] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:13.365 [2024-11-20 13:26:25.226052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60691 ] 00:07:13.624 [2024-11-20 13:26:25.367503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.624 [2024-11-20 13:26:25.426907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.624 [2024-11-20 13:26:25.485095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.624  [2024-11-20T13:26:25.840Z] Copying: 512/512 [B] (average 500 kBps) 00:07:13.883 00:07:13.883 13:26:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ egm1cypwpegjoj5a9xfg9u2t359g8yggk6nrqzejb5ocgljqj5h3lnmhdrp3psbtg8v6rq8ubf6frlkqse0mmamiafa17dqw98gv9yx8dvyfa99fwlpaz1muppmsfy4tyga45z6jttglcwgawe7uxl7hsf77auw3jty9qhf37xfmyuoe6cy1nbduwjfulkar8nlnvyh5hfjnjgvr3dr1g473uxwkvdbf12fjth7usic4a09k7ygi0k5cjgc6rg5p884jp9xu16c71006bmi3gdj26b7xl13c8pbgmbj82ujgobcq6xa8b3lvsn8f6i2dsbh6hg4sc2lobkd4bg63i2upo89ay42y3sy3vmoh3gy7y3ulbnihoa5fq7h3phgieuraeyxgjr1v4jctlrx2pyjrtb884lz2niyorkx76jjqm183yhqxleh3ryotd837eox5e3fhsfoiekbu11pv33bpcj5f553pv8hsswaqeve2xaywukatdg4cgvp05bpt == 
\e\g\m\1\c\y\p\w\p\e\g\j\o\j\5\a\9\x\f\g\9\u\2\t\3\5\9\g\8\y\g\g\k\6\n\r\q\z\e\j\b\5\o\c\g\l\j\q\j\5\h\3\l\n\m\h\d\r\p\3\p\s\b\t\g\8\v\6\r\q\8\u\b\f\6\f\r\l\k\q\s\e\0\m\m\a\m\i\a\f\a\1\7\d\q\w\9\8\g\v\9\y\x\8\d\v\y\f\a\9\9\f\w\l\p\a\z\1\m\u\p\p\m\s\f\y\4\t\y\g\a\4\5\z\6\j\t\t\g\l\c\w\g\a\w\e\7\u\x\l\7\h\s\f\7\7\a\u\w\3\j\t\y\9\q\h\f\3\7\x\f\m\y\u\o\e\6\c\y\1\n\b\d\u\w\j\f\u\l\k\a\r\8\n\l\n\v\y\h\5\h\f\j\n\j\g\v\r\3\d\r\1\g\4\7\3\u\x\w\k\v\d\b\f\1\2\f\j\t\h\7\u\s\i\c\4\a\0\9\k\7\y\g\i\0\k\5\c\j\g\c\6\r\g\5\p\8\8\4\j\p\9\x\u\1\6\c\7\1\0\0\6\b\m\i\3\g\d\j\2\6\b\7\x\l\1\3\c\8\p\b\g\m\b\j\8\2\u\j\g\o\b\c\q\6\x\a\8\b\3\l\v\s\n\8\f\6\i\2\d\s\b\h\6\h\g\4\s\c\2\l\o\b\k\d\4\b\g\6\3\i\2\u\p\o\8\9\a\y\4\2\y\3\s\y\3\v\m\o\h\3\g\y\7\y\3\u\l\b\n\i\h\o\a\5\f\q\7\h\3\p\h\g\i\e\u\r\a\e\y\x\g\j\r\1\v\4\j\c\t\l\r\x\2\p\y\j\r\t\b\8\8\4\l\z\2\n\i\y\o\r\k\x\7\6\j\j\q\m\1\8\3\y\h\q\x\l\e\h\3\r\y\o\t\d\8\3\7\e\o\x\5\e\3\f\h\s\f\o\i\e\k\b\u\1\1\p\v\3\3\b\p\c\j\5\f\5\5\3\p\v\8\h\s\s\w\a\q\e\v\e\2\x\a\y\w\u\k\a\t\d\g\4\c\g\v\p\0\5\b\p\t ]] 00:07:13.883 13:26:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.883 13:26:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:13.883 [2024-11-20 13:26:25.798165] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:13.883 [2024-11-20 13:26:25.798298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60697 ] 00:07:14.142 [2024-11-20 13:26:25.951995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.142 [2024-11-20 13:26:26.020001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.142 [2024-11-20 13:26:26.081921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.400  [2024-11-20T13:26:26.357Z] Copying: 512/512 [B] (average 166 kBps) 00:07:14.400 00:07:14.659 13:26:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ egm1cypwpegjoj5a9xfg9u2t359g8yggk6nrqzejb5ocgljqj5h3lnmhdrp3psbtg8v6rq8ubf6frlkqse0mmamiafa17dqw98gv9yx8dvyfa99fwlpaz1muppmsfy4tyga45z6jttglcwgawe7uxl7hsf77auw3jty9qhf37xfmyuoe6cy1nbduwjfulkar8nlnvyh5hfjnjgvr3dr1g473uxwkvdbf12fjth7usic4a09k7ygi0k5cjgc6rg5p884jp9xu16c71006bmi3gdj26b7xl13c8pbgmbj82ujgobcq6xa8b3lvsn8f6i2dsbh6hg4sc2lobkd4bg63i2upo89ay42y3sy3vmoh3gy7y3ulbnihoa5fq7h3phgieuraeyxgjr1v4jctlrx2pyjrtb884lz2niyorkx76jjqm183yhqxleh3ryotd837eox5e3fhsfoiekbu11pv33bpcj5f553pv8hsswaqeve2xaywukatdg4cgvp05bpt == 
\e\g\m\1\c\y\p\w\p\e\g\j\o\j\5\a\9\x\f\g\9\u\2\t\3\5\9\g\8\y\g\g\k\6\n\r\q\z\e\j\b\5\o\c\g\l\j\q\j\5\h\3\l\n\m\h\d\r\p\3\p\s\b\t\g\8\v\6\r\q\8\u\b\f\6\f\r\l\k\q\s\e\0\m\m\a\m\i\a\f\a\1\7\d\q\w\9\8\g\v\9\y\x\8\d\v\y\f\a\9\9\f\w\l\p\a\z\1\m\u\p\p\m\s\f\y\4\t\y\g\a\4\5\z\6\j\t\t\g\l\c\w\g\a\w\e\7\u\x\l\7\h\s\f\7\7\a\u\w\3\j\t\y\9\q\h\f\3\7\x\f\m\y\u\o\e\6\c\y\1\n\b\d\u\w\j\f\u\l\k\a\r\8\n\l\n\v\y\h\5\h\f\j\n\j\g\v\r\3\d\r\1\g\4\7\3\u\x\w\k\v\d\b\f\1\2\f\j\t\h\7\u\s\i\c\4\a\0\9\k\7\y\g\i\0\k\5\c\j\g\c\6\r\g\5\p\8\8\4\j\p\9\x\u\1\6\c\7\1\0\0\6\b\m\i\3\g\d\j\2\6\b\7\x\l\1\3\c\8\p\b\g\m\b\j\8\2\u\j\g\o\b\c\q\6\x\a\8\b\3\l\v\s\n\8\f\6\i\2\d\s\b\h\6\h\g\4\s\c\2\l\o\b\k\d\4\b\g\6\3\i\2\u\p\o\8\9\a\y\4\2\y\3\s\y\3\v\m\o\h\3\g\y\7\y\3\u\l\b\n\i\h\o\a\5\f\q\7\h\3\p\h\g\i\e\u\r\a\e\y\x\g\j\r\1\v\4\j\c\t\l\r\x\2\p\y\j\r\t\b\8\8\4\l\z\2\n\i\y\o\r\k\x\7\6\j\j\q\m\1\8\3\y\h\q\x\l\e\h\3\r\y\o\t\d\8\3\7\e\o\x\5\e\3\f\h\s\f\o\i\e\k\b\u\1\1\p\v\3\3\b\p\c\j\5\f\5\5\3\p\v\8\h\s\s\w\a\q\e\v\e\2\x\a\y\w\u\k\a\t\d\g\4\c\g\v\p\0\5\b\p\t ]] 00:07:14.659 13:26:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.659 13:26:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:14.659 [2024-11-20 13:26:26.415226] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:14.659 [2024-11-20 13:26:26.415333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60706 ] 00:07:14.659 [2024-11-20 13:26:26.564366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.917 [2024-11-20 13:26:26.625172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.917 [2024-11-20 13:26:26.685785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.917  [2024-11-20T13:26:27.133Z] Copying: 512/512 [B] (average 500 kBps) 00:07:15.176 00:07:15.176 13:26:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ egm1cypwpegjoj5a9xfg9u2t359g8yggk6nrqzejb5ocgljqj5h3lnmhdrp3psbtg8v6rq8ubf6frlkqse0mmamiafa17dqw98gv9yx8dvyfa99fwlpaz1muppmsfy4tyga45z6jttglcwgawe7uxl7hsf77auw3jty9qhf37xfmyuoe6cy1nbduwjfulkar8nlnvyh5hfjnjgvr3dr1g473uxwkvdbf12fjth7usic4a09k7ygi0k5cjgc6rg5p884jp9xu16c71006bmi3gdj26b7xl13c8pbgmbj82ujgobcq6xa8b3lvsn8f6i2dsbh6hg4sc2lobkd4bg63i2upo89ay42y3sy3vmoh3gy7y3ulbnihoa5fq7h3phgieuraeyxgjr1v4jctlrx2pyjrtb884lz2niyorkx76jjqm183yhqxleh3ryotd837eox5e3fhsfoiekbu11pv33bpcj5f553pv8hsswaqeve2xaywukatdg4cgvp05bpt == 
\e\g\m\1\c\y\p\w\p\e\g\j\o\j\5\a\9\x\f\g\9\u\2\t\3\5\9\g\8\y\g\g\k\6\n\r\q\z\e\j\b\5\o\c\g\l\j\q\j\5\h\3\l\n\m\h\d\r\p\3\p\s\b\t\g\8\v\6\r\q\8\u\b\f\6\f\r\l\k\q\s\e\0\m\m\a\m\i\a\f\a\1\7\d\q\w\9\8\g\v\9\y\x\8\d\v\y\f\a\9\9\f\w\l\p\a\z\1\m\u\p\p\m\s\f\y\4\t\y\g\a\4\5\z\6\j\t\t\g\l\c\w\g\a\w\e\7\u\x\l\7\h\s\f\7\7\a\u\w\3\j\t\y\9\q\h\f\3\7\x\f\m\y\u\o\e\6\c\y\1\n\b\d\u\w\j\f\u\l\k\a\r\8\n\l\n\v\y\h\5\h\f\j\n\j\g\v\r\3\d\r\1\g\4\7\3\u\x\w\k\v\d\b\f\1\2\f\j\t\h\7\u\s\i\c\4\a\0\9\k\7\y\g\i\0\k\5\c\j\g\c\6\r\g\5\p\8\8\4\j\p\9\x\u\1\6\c\7\1\0\0\6\b\m\i\3\g\d\j\2\6\b\7\x\l\1\3\c\8\p\b\g\m\b\j\8\2\u\j\g\o\b\c\q\6\x\a\8\b\3\l\v\s\n\8\f\6\i\2\d\s\b\h\6\h\g\4\s\c\2\l\o\b\k\d\4\b\g\6\3\i\2\u\p\o\8\9\a\y\4\2\y\3\s\y\3\v\m\o\h\3\g\y\7\y\3\u\l\b\n\i\h\o\a\5\f\q\7\h\3\p\h\g\i\e\u\r\a\e\y\x\g\j\r\1\v\4\j\c\t\l\r\x\2\p\y\j\r\t\b\8\8\4\l\z\2\n\i\y\o\r\k\x\7\6\j\j\q\m\1\8\3\y\h\q\x\l\e\h\3\r\y\o\t\d\8\3\7\e\o\x\5\e\3\f\h\s\f\o\i\e\k\b\u\1\1\p\v\3\3\b\p\c\j\5\f\5\5\3\p\v\8\h\s\s\w\a\q\e\v\e\2\x\a\y\w\u\k\a\t\d\g\4\c\g\v\p\0\5\b\p\t ]] 00:07:15.176 13:26:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:15.176 13:26:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:15.176 13:26:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:15.176 13:26:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:15.176 13:26:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:15.176 13:26:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:15.176 [2024-11-20 13:26:27.037726] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:15.176 [2024-11-20 13:26:27.037881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60713 ] 00:07:15.434 [2024-11-20 13:26:27.194523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.434 [2024-11-20 13:26:27.256401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.434 [2024-11-20 13:26:27.312429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.434  [2024-11-20T13:26:27.650Z] Copying: 512/512 [B] (average 500 kBps) 00:07:15.693 00:07:15.693 13:26:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cs76hu384ntk540ivqzlnnum51xwxpi25ag0g64q997vryk0o3nc1s9sb3wpojtxd89qdwm65sceuolwwxpmgru81eoyhvbwjy59emq4diaxy8awmtizf2ml5o2jz6i8wtljaapx273mbe35gg2rfrbm3cukyqpmaegez6oxfcw9jho8f5rh31q44o2spjxryudgyq9p9e4cg4i5907lu5t8hzws9dhonistzkp76pgswpmv0eqmtbqm2n271bczz1e4x2mpkzpaev5idmb9h7poorgs7qzvxmv2hj62koajg99feyhvy3hoonvb3k78218kr8eyaqnnpbmhla5un8n93tjdvc4c39e8nuv8nf8v6ogl1ex8zl2ss6380iop9fvvyp4vsv4jg0fcd25fcarl88sokhulygzfg8bfd814v223xw9mz61qe722l38tnmkxe0zmhk4ad88ly0bz3c4rj8zhqrzmemxjc3w3oyvoj9vp6rjxq8au9gkud4ok == \c\s\7\6\h\u\3\8\4\n\t\k\5\4\0\i\v\q\z\l\n\n\u\m\5\1\x\w\x\p\i\2\5\a\g\0\g\6\4\q\9\9\7\v\r\y\k\0\o\3\n\c\1\s\9\s\b\3\w\p\o\j\t\x\d\8\9\q\d\w\m\6\5\s\c\e\u\o\l\w\w\x\p\m\g\r\u\8\1\e\o\y\h\v\b\w\j\y\5\9\e\m\q\4\d\i\a\x\y\8\a\w\m\t\i\z\f\2\m\l\5\o\2\j\z\6\i\8\w\t\l\j\a\a\p\x\2\7\3\m\b\e\3\5\g\g\2\r\f\r\b\m\3\c\u\k\y\q\p\m\a\e\g\e\z\6\o\x\f\c\w\9\j\h\o\8\f\5\r\h\3\1\q\4\4\o\2\s\p\j\x\r\y\u\d\g\y\q\9\p\9\e\4\c\g\4\i\5\9\0\7\l\u\5\t\8\h\z\w\s\9\d\h\o\n\i\s\t\z\k\p\7\6\p\g\s\w\p\m\v\0\e\q\m\t\b\q\m\2\n\2\7\1\b\c\z\z\1\e\4\x\2\m\p\k\z\p\a\e\v\5\i\d\m\b\9\h\7\p\o\o\r\g\s\7\q\z\v\x\m\v\2\h\j\6\2\k\o\a\j\g\9\9\f\e\y\h\v\y\3\h\o\o\n\v\b\3\k\7\8\2\1\8\k\r\8\e\y\a\q\n\n\p\b\m\h\l\a\5\u\n\8\n\9\3\t\j\d\v\c\4\c\3\9\e\8\n\u\v\8\n\f\8\v\6\o\g\l\1\e\x\8\z\l\2\s\s\6\3\8\0\i\o\p\9\f\v\v\y\p\4\v\s\v\4\j\g\0\f\c\d\2\5\f\c\a\r\l\8\8\s\o\k\h\u\l\y\g\z\f\g\8\b\f\d\8\1\4\v\2\2\3\x\w\9\m\z\6\1\q\e\7\2\2\l\3\8\t\n\m\k\x\e\0\z\m\h\k\4\a\d\8\8\l\y\0\b\z\3\c\4\r\j\8\z\h\q\r\z\m\e\m\x\j\c\3\w\3\o\y\v\o\j\9\v\p\6\r\j\x\q\8\a\u\9\g\k\u\d\4\o\k ]] 00:07:15.693 13:26:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:15.693 13:26:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:15.693 [2024-11-20 13:26:27.644867] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:15.693 [2024-11-20 13:26:27.645040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60721 ] 00:07:15.952 [2024-11-20 13:26:27.796742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.952 [2024-11-20 13:26:27.863800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.211 [2024-11-20 13:26:27.924567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.211  [2024-11-20T13:26:28.426Z] Copying: 512/512 [B] (average 500 kBps) 00:07:16.469 00:07:16.470 13:26:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cs76hu384ntk540ivqzlnnum51xwxpi25ag0g64q997vryk0o3nc1s9sb3wpojtxd89qdwm65sceuolwwxpmgru81eoyhvbwjy59emq4diaxy8awmtizf2ml5o2jz6i8wtljaapx273mbe35gg2rfrbm3cukyqpmaegez6oxfcw9jho8f5rh31q44o2spjxryudgyq9p9e4cg4i5907lu5t8hzws9dhonistzkp76pgswpmv0eqmtbqm2n271bczz1e4x2mpkzpaev5idmb9h7poorgs7qzvxmv2hj62koajg99feyhvy3hoonvb3k78218kr8eyaqnnpbmhla5un8n93tjdvc4c39e8nuv8nf8v6ogl1ex8zl2ss6380iop9fvvyp4vsv4jg0fcd25fcarl88sokhulygzfg8bfd814v223xw9mz61qe722l38tnmkxe0zmhk4ad88ly0bz3c4rj8zhqrzmemxjc3w3oyvoj9vp6rjxq8au9gkud4ok == \c\s\7\6\h\u\3\8\4\n\t\k\5\4\0\i\v\q\z\l\n\n\u\m\5\1\x\w\x\p\i\2\5\a\g\0\g\6\4\q\9\9\7\v\r\y\k\0\o\3\n\c\1\s\9\s\b\3\w\p\o\j\t\x\d\8\9\q\d\w\m\6\5\s\c\e\u\o\l\w\w\x\p\m\g\r\u\8\1\e\o\y\h\v\b\w\j\y\5\9\e\m\q\4\d\i\a\x\y\8\a\w\m\t\i\z\f\2\m\l\5\o\2\j\z\6\i\8\w\t\l\j\a\a\p\x\2\7\3\m\b\e\3\5\g\g\2\r\f\r\b\m\3\c\u\k\y\q\p\m\a\e\g\e\z\6\o\x\f\c\w\9\j\h\o\8\f\5\r\h\3\1\q\4\4\o\2\s\p\j\x\r\y\u\d\g\y\q\9\p\9\e\4\c\g\4\i\5\9\0\7\l\u\5\t\8\h\z\w\s\9\d\h\o\n\i\s\t\z\k\p\7\6\p\g\s\w\p\m\v\0\e\q\m\t\b\q\m\2\n\2\7\1\b\c\z\z\1\e\4\x\2\m\p\k\z\p\a\e\v\5\i\d\m\b\9\h\7\p\o\o\r\g\s\7\q\z\v\x\m\v\2\h\j\6\2\k\o\a\j\g\9\9\f\e\y\h\v\y\3\h\o\o\n\v\b\3\k\7\8\2\1\8\k\r\8\e\y\a\q\n\n\p\b\m\h\l\a\5\u\n\8\n\9\3\t\j\d\v\c\4\c\3\9\e\8\n\u\v\8\n\f\8\v\6\o\g\l\1\e\x\8\z\l\2\s\s\6\3\8\0\i\o\p\9\f\v\v\y\p\4\v\s\v\4\j\g\0\f\c\d\2\5\f\c\a\r\l\8\8\s\o\k\h\u\l\y\g\z\f\g\8\b\f\d\8\1\4\v\2\2\3\x\w\9\m\z\6\1\q\e\7\2\2\l\3\8\t\n\m\k\x\e\0\z\m\h\k\4\a\d\8\8\l\y\0\b\z\3\c\4\r\j\8\z\h\q\r\z\m\e\m\x\j\c\3\w\3\o\y\v\o\j\9\v\p\6\r\j\x\q\8\a\u\9\g\k\u\d\4\o\k ]] 00:07:16.470 13:26:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:16.470 13:26:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:16.470 [2024-11-20 13:26:28.240990] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:16.470 [2024-11-20 13:26:28.241073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60728 ] 00:07:16.470 [2024-11-20 13:26:28.389900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.728 [2024-11-20 13:26:28.476457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.729 [2024-11-20 13:26:28.543247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.729  [2024-11-20T13:26:28.944Z] Copying: 512/512 [B] (average 500 kBps) 00:07:16.987 00:07:16.987 13:26:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cs76hu384ntk540ivqzlnnum51xwxpi25ag0g64q997vryk0o3nc1s9sb3wpojtxd89qdwm65sceuolwwxpmgru81eoyhvbwjy59emq4diaxy8awmtizf2ml5o2jz6i8wtljaapx273mbe35gg2rfrbm3cukyqpmaegez6oxfcw9jho8f5rh31q44o2spjxryudgyq9p9e4cg4i5907lu5t8hzws9dhonistzkp76pgswpmv0eqmtbqm2n271bczz1e4x2mpkzpaev5idmb9h7poorgs7qzvxmv2hj62koajg99feyhvy3hoonvb3k78218kr8eyaqnnpbmhla5un8n93tjdvc4c39e8nuv8nf8v6ogl1ex8zl2ss6380iop9fvvyp4vsv4jg0fcd25fcarl88sokhulygzfg8bfd814v223xw9mz61qe722l38tnmkxe0zmhk4ad88ly0bz3c4rj8zhqrzmemxjc3w3oyvoj9vp6rjxq8au9gkud4ok == \c\s\7\6\h\u\3\8\4\n\t\k\5\4\0\i\v\q\z\l\n\n\u\m\5\1\x\w\x\p\i\2\5\a\g\0\g\6\4\q\9\9\7\v\r\y\k\0\o\3\n\c\1\s\9\s\b\3\w\p\o\j\t\x\d\8\9\q\d\w\m\6\5\s\c\e\u\o\l\w\w\x\p\m\g\r\u\8\1\e\o\y\h\v\b\w\j\y\5\9\e\m\q\4\d\i\a\x\y\8\a\w\m\t\i\z\f\2\m\l\5\o\2\j\z\6\i\8\w\t\l\j\a\a\p\x\2\7\3\m\b\e\3\5\g\g\2\r\f\r\b\m\3\c\u\k\y\q\p\m\a\e\g\e\z\6\o\x\f\c\w\9\j\h\o\8\f\5\r\h\3\1\q\4\4\o\2\s\p\j\x\r\y\u\d\g\y\q\9\p\9\e\4\c\g\4\i\5\9\0\7\l\u\5\t\8\h\z\w\s\9\d\h\o\n\i\s\t\z\k\p\7\6\p\g\s\w\p\m\v\0\e\q\m\t\b\q\m\2\n\2\7\1\b\c\z\z\1\e\4\x\2\m\p\k\z\p\a\e\v\5\i\d\m\b\9\h\7\p\o\o\r\g\s\7\q\z\v\x\m\v\2\h\j\6\2\k\o\a\j\g\9\9\f\e\y\h\v\y\3\h\o\o\n\v\b\3\k\7\8\2\1\8\k\r\8\e\y\a\q\n\n\p\b\m\h\l\a\5\u\n\8\n\9\3\t\j\d\v\c\4\c\3\9\e\8\n\u\v\8\n\f\8\v\6\o\g\l\1\e\x\8\z\l\2\s\s\6\3\8\0\i\o\p\9\f\v\v\y\p\4\v\s\v\4\j\g\0\f\c\d\2\5\f\c\a\r\l\8\8\s\o\k\h\u\l\y\g\z\f\g\8\b\f\d\8\1\4\v\2\2\3\x\w\9\m\z\6\1\q\e\7\2\2\l\3\8\t\n\m\k\x\e\0\z\m\h\k\4\a\d\8\8\l\y\0\b\z\3\c\4\r\j\8\z\h\q\r\z\m\e\m\x\j\c\3\w\3\o\y\v\o\j\9\v\p\6\r\j\x\q\8\a\u\9\g\k\u\d\4\o\k ]] 00:07:16.987 13:26:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:16.987 13:26:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:16.987 [2024-11-20 13:26:28.875970] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:16.987 [2024-11-20 13:26:28.876124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60736 ] 00:07:17.246 [2024-11-20 13:26:29.031713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.246 [2024-11-20 13:26:29.102572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.246 [2024-11-20 13:26:29.165171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.503  [2024-11-20T13:26:29.460Z] Copying: 512/512 [B] (average 500 kBps) 00:07:17.503 00:07:17.504 13:26:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cs76hu384ntk540ivqzlnnum51xwxpi25ag0g64q997vryk0o3nc1s9sb3wpojtxd89qdwm65sceuolwwxpmgru81eoyhvbwjy59emq4diaxy8awmtizf2ml5o2jz6i8wtljaapx273mbe35gg2rfrbm3cukyqpmaegez6oxfcw9jho8f5rh31q44o2spjxryudgyq9p9e4cg4i5907lu5t8hzws9dhonistzkp76pgswpmv0eqmtbqm2n271bczz1e4x2mpkzpaev5idmb9h7poorgs7qzvxmv2hj62koajg99feyhvy3hoonvb3k78218kr8eyaqnnpbmhla5un8n93tjdvc4c39e8nuv8nf8v6ogl1ex8zl2ss6380iop9fvvyp4vsv4jg0fcd25fcarl88sokhulygzfg8bfd814v223xw9mz61qe722l38tnmkxe0zmhk4ad88ly0bz3c4rj8zhqrzmemxjc3w3oyvoj9vp6rjxq8au9gkud4ok == \c\s\7\6\h\u\3\8\4\n\t\k\5\4\0\i\v\q\z\l\n\n\u\m\5\1\x\w\x\p\i\2\5\a\g\0\g\6\4\q\9\9\7\v\r\y\k\0\o\3\n\c\1\s\9\s\b\3\w\p\o\j\t\x\d\8\9\q\d\w\m\6\5\s\c\e\u\o\l\w\w\x\p\m\g\r\u\8\1\e\o\y\h\v\b\w\j\y\5\9\e\m\q\4\d\i\a\x\y\8\a\w\m\t\i\z\f\2\m\l\5\o\2\j\z\6\i\8\w\t\l\j\a\a\p\x\2\7\3\m\b\e\3\5\g\g\2\r\f\r\b\m\3\c\u\k\y\q\p\m\a\e\g\e\z\6\o\x\f\c\w\9\j\h\o\8\f\5\r\h\3\1\q\4\4\o\2\s\p\j\x\r\y\u\d\g\y\q\9\p\9\e\4\c\g\4\i\5\9\0\7\l\u\5\t\8\h\z\w\s\9\d\h\o\n\i\s\t\z\k\p\7\6\p\g\s\w\p\m\v\0\e\q\m\t\b\q\m\2\n\2\7\1\b\c\z\z\1\e\4\x\2\m\p\k\z\p\a\e\v\5\i\d\m\b\9\h\7\p\o\o\r\g\s\7\q\z\v\x\m\v\2\h\j\6\2\k\o\a\j\g\9\9\f\e\y\h\v\y\3\h\o\o\n\v\b\3\k\7\8\2\1\8\k\r\8\e\y\a\q\n\n\p\b\m\h\l\a\5\u\n\8\n\9\3\t\j\d\v\c\4\c\3\9\e\8\n\u\v\8\n\f\8\v\6\o\g\l\1\e\x\8\z\l\2\s\s\6\3\8\0\i\o\p\9\f\v\v\y\p\4\v\s\v\4\j\g\0\f\c\d\2\5\f\c\a\r\l\8\8\s\o\k\h\u\l\y\g\z\f\g\8\b\f\d\8\1\4\v\2\2\3\x\w\9\m\z\6\1\q\e\7\2\2\l\3\8\t\n\m\k\x\e\0\z\m\h\k\4\a\d\8\8\l\y\0\b\z\3\c\4\r\j\8\z\h\q\r\z\m\e\m\x\j\c\3\w\3\o\y\v\o\j\9\v\p\6\r\j\x\q\8\a\u\9\g\k\u\d\4\o\k ]] 00:07:17.504 00:07:17.504 real 0m4.868s 00:07:17.504 user 0m2.667s 00:07:17.504 sys 0m1.225s 00:07:17.504 13:26:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.504 13:26:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:17.504 ************************************ 00:07:17.504 END TEST dd_flags_misc_forced_aio 00:07:17.504 ************************************ 00:07:17.762 13:26:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:17.762 13:26:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:17.762 13:26:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:17.762 00:07:17.762 real 0m21.456s 00:07:17.762 user 0m10.440s 00:07:17.762 sys 0m7.074s 00:07:17.762 13:26:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.762 13:26:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
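The dd_flags_misc_forced_aio passes traced above pair every read flag in flags_ro (direct, nonblock) with every write flag in flags_rw (the same two plus sync and dsync): each pass regenerates 512 random bytes in dd.dump0, copies it to dd.dump1 through spdk_dd --aio with one --iflag/--oflag pair, and compares the two files. A minimal standalone sketch of that loop, reusing the paths and flag sets from the trace (head -c stands in here for the harness's gen_bytes helper, and cmp for its literal string comparison):

# Round-trip each --iflag/--oflag pair and verify the copy survives (paths taken from the trace above).
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
flags_ro=(direct nonblock)                # read-side flags
flags_rw=("${flags_ro[@]}" sync dsync)    # write-side flags add sync and dsync
for flag_ro in "${flags_ro[@]}"; do
  head -c 512 /dev/urandom > "$SRC"       # stand-in for gen_bytes 512
  for flag_rw in "${flags_rw[@]}"; do
    "$DD" --aio --if="$SRC" --iflag="$flag_ro" --of="$DST" --oflag="$flag_rw"
    cmp "$SRC" "$DST"                     # the harness string-compares the two file contents instead
  done
done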
00:07:17.762 ************************************ 00:07:17.762 END TEST spdk_dd_posix 00:07:17.762 ************************************ 00:07:17.762 13:26:29 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:17.762 13:26:29 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.762 13:26:29 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.762 13:26:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:17.762 ************************************ 00:07:17.762 START TEST spdk_dd_malloc 00:07:17.762 ************************************ 00:07:17.762 13:26:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:17.762 * Looking for test storage... 00:07:17.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:17.762 13:26:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:17.762 13:26:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:17.762 13:26:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:17.762 13:26:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.021 --rc genhtml_branch_coverage=1 00:07:18.021 --rc genhtml_function_coverage=1 00:07:18.021 --rc genhtml_legend=1 00:07:18.021 --rc geninfo_all_blocks=1 00:07:18.021 --rc geninfo_unexecuted_blocks=1 00:07:18.021 00:07:18.021 ' 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.021 --rc genhtml_branch_coverage=1 00:07:18.021 --rc genhtml_function_coverage=1 00:07:18.021 --rc genhtml_legend=1 00:07:18.021 --rc geninfo_all_blocks=1 00:07:18.021 --rc geninfo_unexecuted_blocks=1 00:07:18.021 00:07:18.021 ' 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.021 --rc genhtml_branch_coverage=1 00:07:18.021 --rc genhtml_function_coverage=1 00:07:18.021 --rc genhtml_legend=1 00:07:18.021 --rc geninfo_all_blocks=1 00:07:18.021 --rc geninfo_unexecuted_blocks=1 00:07:18.021 00:07:18.021 ' 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.021 --rc genhtml_branch_coverage=1 00:07:18.021 --rc genhtml_function_coverage=1 00:07:18.021 --rc genhtml_legend=1 00:07:18.021 --rc geninfo_all_blocks=1 00:07:18.021 --rc geninfo_unexecuted_blocks=1 00:07:18.021 00:07:18.021 ' 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.021 13:26:29 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:18.021 ************************************ 00:07:18.021 START TEST dd_malloc_copy 00:07:18.021 ************************************ 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:18.021 13:26:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:18.021 [2024-11-20 13:26:29.798126] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:18.021 [2024-11-20 13:26:29.798250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60818 ] 00:07:18.021 { 00:07:18.021 "subsystems": [ 00:07:18.021 { 00:07:18.021 "subsystem": "bdev", 00:07:18.021 "config": [ 00:07:18.021 { 00:07:18.021 "params": { 00:07:18.021 "block_size": 512, 00:07:18.021 "num_blocks": 1048576, 00:07:18.021 "name": "malloc0" 00:07:18.021 }, 00:07:18.021 "method": "bdev_malloc_create" 00:07:18.021 }, 00:07:18.021 { 00:07:18.021 "params": { 00:07:18.021 "block_size": 512, 00:07:18.021 "num_blocks": 1048576, 00:07:18.021 "name": "malloc1" 00:07:18.021 }, 00:07:18.021 "method": "bdev_malloc_create" 00:07:18.021 }, 00:07:18.021 { 00:07:18.021 "method": "bdev_wait_for_examine" 00:07:18.021 } 00:07:18.021 ] 00:07:18.021 } 00:07:18.021 ] 00:07:18.021 } 00:07:18.021 [2024-11-20 13:26:29.962100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.281 [2024-11-20 13:26:30.086355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.281 [2024-11-20 13:26:30.150065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.658  [2024-11-20T13:26:32.552Z] Copying: 183/512 [MB] (183 MBps) [2024-11-20T13:26:33.489Z] Copying: 369/512 [MB] (185 MBps) [2024-11-20T13:26:34.057Z] Copying: 512/512 [MB] (average 185 MBps) 00:07:22.100 00:07:22.100 13:26:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:22.100 13:26:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:22.100 13:26:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:22.100 13:26:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:22.100 [2024-11-20 13:26:33.979573] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:22.100 [2024-11-20 13:26:33.979709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60871 ] 00:07:22.100 { 00:07:22.100 "subsystems": [ 00:07:22.100 { 00:07:22.100 "subsystem": "bdev", 00:07:22.100 "config": [ 00:07:22.100 { 00:07:22.100 "params": { 00:07:22.100 "block_size": 512, 00:07:22.100 "num_blocks": 1048576, 00:07:22.100 "name": "malloc0" 00:07:22.100 }, 00:07:22.100 "method": "bdev_malloc_create" 00:07:22.100 }, 00:07:22.100 { 00:07:22.100 "params": { 00:07:22.100 "block_size": 512, 00:07:22.100 "num_blocks": 1048576, 00:07:22.100 "name": "malloc1" 00:07:22.100 }, 00:07:22.100 "method": "bdev_malloc_create" 00:07:22.100 }, 00:07:22.100 { 00:07:22.100 "method": "bdev_wait_for_examine" 00:07:22.100 } 00:07:22.100 ] 00:07:22.100 } 00:07:22.100 ] 00:07:22.100 } 00:07:22.360 [2024-11-20 13:26:34.128543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.360 [2024-11-20 13:26:34.191722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.360 [2024-11-20 13:26:34.250114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.733  [2024-11-20T13:26:36.626Z] Copying: 186/512 [MB] (186 MBps) [2024-11-20T13:26:37.561Z] Copying: 369/512 [MB] (182 MBps) [2024-11-20T13:26:38.127Z] Copying: 512/512 [MB] (average 185 MBps) 00:07:26.170 00:07:26.170 00:07:26.170 real 0m8.271s 00:07:26.170 user 0m7.215s 00:07:26.170 sys 0m0.905s 00:07:26.170 13:26:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.170 13:26:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:26.170 ************************************ 00:07:26.170 END TEST dd_malloc_copy 00:07:26.170 ************************************ 00:07:26.170 00:07:26.170 real 0m8.521s 00:07:26.170 user 0m7.356s 00:07:26.170 sys 0m1.021s 00:07:26.170 13:26:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.170 ************************************ 00:07:26.170 END TEST spdk_dd_malloc 00:07:26.170 13:26:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:26.170 ************************************ 00:07:26.170 13:26:38 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:26.170 13:26:38 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:26.170 13:26:38 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.170 13:26:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:26.170 ************************************ 00:07:26.170 START TEST spdk_dd_bdev_to_bdev 00:07:26.170 ************************************ 00:07:26.170 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:26.429 * Looking for test storage... 
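For reference, the dd_malloc_copy run that just completed above drives spdk_dd purely from a JSON bdev config handed over on a spare file descriptor: two malloc bdevs of 1048576 blocks x 512 bytes (512 MiB each) plus bdev_wait_for_examine, with the copy itself expressed as --ib/--ob. A hedged standalone form of the same invocation, writing a condensed copy of that config to a temporary file instead of /dev/fd/62:

# malloc0 -> malloc1 copy as traced above; JSON condensed from the run's own config output.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=$(mktemp)
cat > "$conf" <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_malloc_create", "params": {"name": "malloc0", "block_size": 512, "num_blocks": 1048576}},
  {"method": "bdev_malloc_create", "params": {"name": "malloc1", "block_size": 512, "num_blocks": 1048576}},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
"$DD" --ib=malloc0 --ob=malloc1 --json "$conf"   # 512 MiB copied at roughly 185 MB/s in the run above
rm -f "$conf"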
00:07:26.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:26.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.429 --rc genhtml_branch_coverage=1 00:07:26.429 --rc genhtml_function_coverage=1 00:07:26.429 --rc genhtml_legend=1 00:07:26.429 --rc geninfo_all_blocks=1 00:07:26.429 --rc geninfo_unexecuted_blocks=1 00:07:26.429 00:07:26.429 ' 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:26.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.429 --rc genhtml_branch_coverage=1 00:07:26.429 --rc genhtml_function_coverage=1 00:07:26.429 --rc genhtml_legend=1 00:07:26.429 --rc geninfo_all_blocks=1 00:07:26.429 --rc geninfo_unexecuted_blocks=1 00:07:26.429 00:07:26.429 ' 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:26.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.429 --rc genhtml_branch_coverage=1 00:07:26.429 --rc genhtml_function_coverage=1 00:07:26.429 --rc genhtml_legend=1 00:07:26.429 --rc geninfo_all_blocks=1 00:07:26.429 --rc geninfo_unexecuted_blocks=1 00:07:26.429 00:07:26.429 ' 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:26.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.429 --rc genhtml_branch_coverage=1 00:07:26.429 --rc genhtml_function_coverage=1 00:07:26.429 --rc genhtml_legend=1 00:07:26.429 --rc geninfo_all_blocks=1 00:07:26.429 --rc geninfo_unexecuted_blocks=1 00:07:26.429 00:07:26.429 ' 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.429 13:26:38 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.429 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:26.430 ************************************ 00:07:26.430 START TEST dd_inflate_file 00:07:26.430 ************************************ 00:07:26.430 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:26.430 [2024-11-20 13:26:38.369376] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:26.430 [2024-11-20 13:26:38.369477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60989 ] 00:07:26.688 [2024-11-20 13:26:38.516086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.688 [2024-11-20 13:26:38.583811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.947 [2024-11-20 13:26:38.646736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.947  [2024-11-20T13:26:39.164Z] Copying: 64/64 [MB] (average 1422 MBps) 00:07:27.207 00:07:27.207 00:07:27.207 real 0m0.617s 00:07:27.207 user 0m0.352s 00:07:27.207 sys 0m0.332s 00:07:27.207 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.207 ************************************ 00:07:27.207 END TEST dd_inflate_file 00:07:27.207 ************************************ 00:07:27.207 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:27.207 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:27.207 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:27.207 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:27.207 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:27.207 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:27.207 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:27.207 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:27.207 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.207 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:27.207 ************************************ 00:07:27.207 START TEST dd_copy_to_out_bdev 00:07:27.207 ************************************ 00:07:27.207 13:26:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:27.207 { 00:07:27.207 "subsystems": [ 00:07:27.207 { 00:07:27.207 "subsystem": "bdev", 00:07:27.207 "config": [ 00:07:27.207 { 00:07:27.207 "params": { 00:07:27.207 "trtype": "pcie", 00:07:27.207 "traddr": "0000:00:10.0", 00:07:27.207 "name": "Nvme0" 00:07:27.207 }, 00:07:27.207 "method": "bdev_nvme_attach_controller" 00:07:27.207 }, 00:07:27.207 { 00:07:27.207 "params": { 00:07:27.207 "trtype": "pcie", 00:07:27.207 "traddr": "0000:00:11.0", 00:07:27.207 "name": "Nvme1" 00:07:27.207 }, 00:07:27.207 "method": "bdev_nvme_attach_controller" 00:07:27.207 }, 00:07:27.207 { 00:07:27.207 "method": "bdev_wait_for_examine" 00:07:27.207 } 00:07:27.207 ] 00:07:27.208 } 00:07:27.208 ] 00:07:27.208 } 00:07:27.208 [2024-11-20 13:26:39.041774] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:27.208 [2024-11-20 13:26:39.041881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61028 ] 00:07:27.499 [2024-11-20 13:26:39.191374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.499 [2024-11-20 13:26:39.258585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.499 [2024-11-20 13:26:39.319927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.876  [2024-11-20T13:26:40.833Z] Copying: 57/64 [MB] (57 MBps) [2024-11-20T13:26:41.092Z] Copying: 64/64 [MB] (average 57 MBps) 00:07:29.135 00:07:29.135 00:07:29.135 real 0m1.897s 00:07:29.135 user 0m1.649s 00:07:29.135 sys 0m1.501s 00:07:29.135 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.135 ************************************ 00:07:29.135 END TEST dd_copy_to_out_bdev 00:07:29.135 ************************************ 00:07:29.135 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:29.135 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:29.135 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:29.135 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.135 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.135 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:29.135 ************************************ 00:07:29.135 START TEST dd_offset_magic 00:07:29.136 ************************************ 00:07:29.136 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:07:29.136 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:29.136 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:29.136 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:29.136 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:29.136 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:29.136 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:29.136 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:29.136 13:26:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:29.136 [2024-11-20 13:26:40.990573] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:29.136 [2024-11-20 13:26:40.990674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61073 ] 00:07:29.136 { 00:07:29.136 "subsystems": [ 00:07:29.136 { 00:07:29.136 "subsystem": "bdev", 00:07:29.136 "config": [ 00:07:29.136 { 00:07:29.136 "params": { 00:07:29.136 "trtype": "pcie", 00:07:29.136 "traddr": "0000:00:10.0", 00:07:29.136 "name": "Nvme0" 00:07:29.136 }, 00:07:29.136 "method": "bdev_nvme_attach_controller" 00:07:29.136 }, 00:07:29.136 { 00:07:29.136 "params": { 00:07:29.136 "trtype": "pcie", 00:07:29.136 "traddr": "0000:00:11.0", 00:07:29.136 "name": "Nvme1" 00:07:29.136 }, 00:07:29.136 "method": "bdev_nvme_attach_controller" 00:07:29.136 }, 00:07:29.136 { 00:07:29.136 "method": "bdev_wait_for_examine" 00:07:29.136 } 00:07:29.136 ] 00:07:29.136 } 00:07:29.136 ] 00:07:29.136 } 00:07:29.395 [2024-11-20 13:26:41.138183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.395 [2024-11-20 13:26:41.199968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.395 [2024-11-20 13:26:41.266098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.653  [2024-11-20T13:26:41.869Z] Copying: 65/65 [MB] (average 955 MBps) 00:07:29.912 00:07:29.912 13:26:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:29.912 13:26:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:29.912 13:26:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:29.912 13:26:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:29.912 [2024-11-20 13:26:41.820562] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:29.912 [2024-11-20 13:26:41.820643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61093 ] 00:07:29.912 { 00:07:29.912 "subsystems": [ 00:07:29.912 { 00:07:29.912 "subsystem": "bdev", 00:07:29.912 "config": [ 00:07:29.912 { 00:07:29.912 "params": { 00:07:29.912 "trtype": "pcie", 00:07:29.912 "traddr": "0000:00:10.0", 00:07:29.912 "name": "Nvme0" 00:07:29.912 }, 00:07:29.912 "method": "bdev_nvme_attach_controller" 00:07:29.912 }, 00:07:29.912 { 00:07:29.912 "params": { 00:07:29.912 "trtype": "pcie", 00:07:29.912 "traddr": "0000:00:11.0", 00:07:29.912 "name": "Nvme1" 00:07:29.912 }, 00:07:29.912 "method": "bdev_nvme_attach_controller" 00:07:29.912 }, 00:07:29.912 { 00:07:29.912 "method": "bdev_wait_for_examine" 00:07:29.912 } 00:07:29.912 ] 00:07:29.912 } 00:07:29.912 ] 00:07:29.912 } 00:07:30.172 [2024-11-20 13:26:41.967408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.172 [2024-11-20 13:26:42.035912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.172 [2024-11-20 13:26:42.099179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.431  [2024-11-20T13:26:42.647Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:30.690 00:07:30.690 13:26:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:30.690 13:26:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:30.690 13:26:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:30.690 13:26:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:30.690 13:26:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:30.690 13:26:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:30.690 13:26:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:30.690 [2024-11-20 13:26:42.597784] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:30.690 [2024-11-20 13:26:42.597880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61104 ] 00:07:30.690 { 00:07:30.690 "subsystems": [ 00:07:30.690 { 00:07:30.690 "subsystem": "bdev", 00:07:30.690 "config": [ 00:07:30.690 { 00:07:30.690 "params": { 00:07:30.690 "trtype": "pcie", 00:07:30.690 "traddr": "0000:00:10.0", 00:07:30.690 "name": "Nvme0" 00:07:30.690 }, 00:07:30.690 "method": "bdev_nvme_attach_controller" 00:07:30.690 }, 00:07:30.690 { 00:07:30.690 "params": { 00:07:30.690 "trtype": "pcie", 00:07:30.690 "traddr": "0000:00:11.0", 00:07:30.690 "name": "Nvme1" 00:07:30.690 }, 00:07:30.690 "method": "bdev_nvme_attach_controller" 00:07:30.690 }, 00:07:30.690 { 00:07:30.690 "method": "bdev_wait_for_examine" 00:07:30.690 } 00:07:30.690 ] 00:07:30.690 } 00:07:30.690 ] 00:07:30.690 } 00:07:30.950 [2024-11-20 13:26:42.750212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.950 [2024-11-20 13:26:42.819780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.950 [2024-11-20 13:26:42.879788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.208  [2024-11-20T13:26:43.424Z] Copying: 65/65 [MB] (average 984 MBps) 00:07:31.467 00:07:31.467 13:26:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:31.467 13:26:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:31.467 13:26:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:31.467 13:26:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:31.725 [2024-11-20 13:26:43.443605] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:31.725 [2024-11-20 13:26:43.443717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61124 ] 00:07:31.725 { 00:07:31.725 "subsystems": [ 00:07:31.725 { 00:07:31.725 "subsystem": "bdev", 00:07:31.725 "config": [ 00:07:31.725 { 00:07:31.725 "params": { 00:07:31.725 "trtype": "pcie", 00:07:31.725 "traddr": "0000:00:10.0", 00:07:31.726 "name": "Nvme0" 00:07:31.726 }, 00:07:31.726 "method": "bdev_nvme_attach_controller" 00:07:31.726 }, 00:07:31.726 { 00:07:31.726 "params": { 00:07:31.726 "trtype": "pcie", 00:07:31.726 "traddr": "0000:00:11.0", 00:07:31.726 "name": "Nvme1" 00:07:31.726 }, 00:07:31.726 "method": "bdev_nvme_attach_controller" 00:07:31.726 }, 00:07:31.726 { 00:07:31.726 "method": "bdev_wait_for_examine" 00:07:31.726 } 00:07:31.726 ] 00:07:31.726 } 00:07:31.726 ] 00:07:31.726 } 00:07:31.726 [2024-11-20 13:26:43.592330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.726 [2024-11-20 13:26:43.662567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.984 [2024-11-20 13:26:43.724667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.984  [2024-11-20T13:26:44.199Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:32.242 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:32.243 00:07:32.243 real 0m3.190s 00:07:32.243 user 0m2.293s 00:07:32.243 sys 0m1.022s 00:07:32.243 ************************************ 00:07:32.243 END TEST dd_offset_magic 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:32.243 ************************************ 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:32.243 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:32.514 [2024-11-20 13:26:44.215551] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:32.514 [2024-11-20 13:26:44.215651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61161 ] 00:07:32.514 { 00:07:32.514 "subsystems": [ 00:07:32.514 { 00:07:32.514 "subsystem": "bdev", 00:07:32.514 "config": [ 00:07:32.514 { 00:07:32.514 "params": { 00:07:32.514 "trtype": "pcie", 00:07:32.514 "traddr": "0000:00:10.0", 00:07:32.514 "name": "Nvme0" 00:07:32.514 }, 00:07:32.514 "method": "bdev_nvme_attach_controller" 00:07:32.514 }, 00:07:32.514 { 00:07:32.514 "params": { 00:07:32.514 "trtype": "pcie", 00:07:32.514 "traddr": "0000:00:11.0", 00:07:32.514 "name": "Nvme1" 00:07:32.514 }, 00:07:32.514 "method": "bdev_nvme_attach_controller" 00:07:32.514 }, 00:07:32.514 { 00:07:32.514 "method": "bdev_wait_for_examine" 00:07:32.514 } 00:07:32.514 ] 00:07:32.514 } 00:07:32.514 ] 00:07:32.514 } 00:07:32.514 [2024-11-20 13:26:44.368642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.514 [2024-11-20 13:26:44.439861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.774 [2024-11-20 13:26:44.500973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.774  [2024-11-20T13:26:44.989Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:33.032 00:07:33.032 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:33.032 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:33.032 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:33.032 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:33.032 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:33.032 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:33.032 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:33.032 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:33.033 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:33.033 13:26:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:33.033 { 00:07:33.033 "subsystems": [ 00:07:33.033 { 00:07:33.033 "subsystem": "bdev", 00:07:33.033 "config": [ 00:07:33.033 { 00:07:33.033 "params": { 00:07:33.033 "trtype": "pcie", 00:07:33.033 "traddr": "0000:00:10.0", 00:07:33.033 "name": "Nvme0" 00:07:33.033 }, 00:07:33.033 "method": "bdev_nvme_attach_controller" 00:07:33.033 }, 00:07:33.033 { 00:07:33.033 "params": { 00:07:33.033 "trtype": "pcie", 00:07:33.033 "traddr": "0000:00:11.0", 00:07:33.033 "name": "Nvme1" 00:07:33.033 }, 00:07:33.033 "method": "bdev_nvme_attach_controller" 00:07:33.033 }, 00:07:33.033 { 00:07:33.033 "method": "bdev_wait_for_examine" 00:07:33.033 } 00:07:33.033 ] 00:07:33.033 } 00:07:33.033 ] 00:07:33.033 } 00:07:33.033 [2024-11-20 13:26:44.964303] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:33.033 [2024-11-20 13:26:44.964399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61182 ] 00:07:33.291 [2024-11-20 13:26:45.114322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.291 [2024-11-20 13:26:45.181289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.291 [2024-11-20 13:26:45.243029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.550  [2024-11-20T13:26:45.764Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:07:33.807 00:07:33.807 13:26:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:33.807 00:07:33.807 real 0m7.572s 00:07:33.807 user 0m5.516s 00:07:33.807 sys 0m3.625s 00:07:33.807 13:26:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.807 ************************************ 00:07:33.807 END TEST spdk_dd_bdev_to_bdev 00:07:33.807 13:26:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:33.808 ************************************ 00:07:33.808 13:26:45 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:33.808 13:26:45 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:33.808 13:26:45 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.808 13:26:45 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.808 13:26:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:33.808 ************************************ 00:07:33.808 START TEST spdk_dd_uring 00:07:33.808 ************************************ 00:07:33.808 13:26:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:34.067 * Looking for test storage... 
00:07:34.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:34.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.067 --rc genhtml_branch_coverage=1 00:07:34.067 --rc genhtml_function_coverage=1 00:07:34.067 --rc genhtml_legend=1 00:07:34.067 --rc geninfo_all_blocks=1 00:07:34.067 --rc geninfo_unexecuted_blocks=1 00:07:34.067 00:07:34.067 ' 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:34.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.067 --rc genhtml_branch_coverage=1 00:07:34.067 --rc genhtml_function_coverage=1 00:07:34.067 --rc genhtml_legend=1 00:07:34.067 --rc geninfo_all_blocks=1 00:07:34.067 --rc geninfo_unexecuted_blocks=1 00:07:34.067 00:07:34.067 ' 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:34.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.067 --rc genhtml_branch_coverage=1 00:07:34.067 --rc genhtml_function_coverage=1 00:07:34.067 --rc genhtml_legend=1 00:07:34.067 --rc geninfo_all_blocks=1 00:07:34.067 --rc geninfo_unexecuted_blocks=1 00:07:34.067 00:07:34.067 ' 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:34.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.067 --rc genhtml_branch_coverage=1 00:07:34.067 --rc genhtml_function_coverage=1 00:07:34.067 --rc genhtml_legend=1 00:07:34.067 --rc geninfo_all_blocks=1 00:07:34.067 --rc geninfo_unexecuted_blocks=1 00:07:34.067 00:07:34.067 ' 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.067 13:26:45 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:34.068 ************************************ 00:07:34.068 START TEST dd_uring_copy 00:07:34.068 ************************************ 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:34.068 
13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=5kbqikac9n3ddxwqhgq6lwy2nww033i6osej3ikdzl9frwdzzjr4vn9l5cjkfj1tqwjmltiqa2ycxhp5ihj0o5ndjt4977wr1po5jwvkl55d3wt5v1wqkj745blt71c958p0hsnb94r17adldw6j29uw9yuvy5urrgmgzliju98kgci01518hi4jzwqq5m6jacbya1sqygisrlomwsuti65z2uxa8fqlducylzruothcv513n68jgm37xa4z4jlaxx0vuum26lhaxbhk7bkoaik9isrmzxiqr53ihmnq59xq667wrrdoq6cvrzod0r2p76w27eddo50tvdomdsl4lt2xmgnh99p2doz3bys1lfaqjvp0yb2yk4s4lvoclt3uiwzzyrj6j5lull0xq3avvv0o14sz5zj87utkzze50b743a4u4eivq7uog4l8wuc2kh0u762n40s5olr9ymw9reihyjaevb9y7nzshjwxo3japywuwqbtu8dtiz808pjmq9bicobupptfaypjfr0oqnvdvixc03icvz06lewf3cuzzt2hpm5faxwdn0u0iam7o0f6xows6dwydkyzr75imkfm71c7saomd39nxx30enmeucbzoinjyza1hkz2abrb4cl5te8mvh59um8f9dmgnzy3dqjc81yokzabhntyvkqfao5crv0fadpxu4i66hcu3ubv9fnagpqg6l9kvnmq4ooymkw63dp1hl2crnsi21rg3gaquuflwrtdkbhhevc1em1iyb225w7nk3n5k9m0bi718i91ty1znz5x7mis3zd8vjma4xopu5l76iba5xxb6af6jaq160q6l9uykyfohdydo6rt0mdr1ze02c8k4rmdtdxeh9ca8x39x87pw8apxzpa5xs37l5fo10q2ll6c80tk5hdmrfvjjaw3lfo9i2nqjtg1d1wauajen7dte3ku8rjq5qwsgq1xy27rm4m2wcciv2ncbvycqg1pqv8txgwd4uj4zx49wtxdj6pj2we 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
5kbqikac9n3ddxwqhgq6lwy2nww033i6osej3ikdzl9frwdzzjr4vn9l5cjkfj1tqwjmltiqa2ycxhp5ihj0o5ndjt4977wr1po5jwvkl55d3wt5v1wqkj745blt71c958p0hsnb94r17adldw6j29uw9yuvy5urrgmgzliju98kgci01518hi4jzwqq5m6jacbya1sqygisrlomwsuti65z2uxa8fqlducylzruothcv513n68jgm37xa4z4jlaxx0vuum26lhaxbhk7bkoaik9isrmzxiqr53ihmnq59xq667wrrdoq6cvrzod0r2p76w27eddo50tvdomdsl4lt2xmgnh99p2doz3bys1lfaqjvp0yb2yk4s4lvoclt3uiwzzyrj6j5lull0xq3avvv0o14sz5zj87utkzze50b743a4u4eivq7uog4l8wuc2kh0u762n40s5olr9ymw9reihyjaevb9y7nzshjwxo3japywuwqbtu8dtiz808pjmq9bicobupptfaypjfr0oqnvdvixc03icvz06lewf3cuzzt2hpm5faxwdn0u0iam7o0f6xows6dwydkyzr75imkfm71c7saomd39nxx30enmeucbzoinjyza1hkz2abrb4cl5te8mvh59um8f9dmgnzy3dqjc81yokzabhntyvkqfao5crv0fadpxu4i66hcu3ubv9fnagpqg6l9kvnmq4ooymkw63dp1hl2crnsi21rg3gaquuflwrtdkbhhevc1em1iyb225w7nk3n5k9m0bi718i91ty1znz5x7mis3zd8vjma4xopu5l76iba5xxb6af6jaq160q6l9uykyfohdydo6rt0mdr1ze02c8k4rmdtdxeh9ca8x39x87pw8apxzpa5xs37l5fo10q2ll6c80tk5hdmrfvjjaw3lfo9i2nqjtg1d1wauajen7dte3ku8rjq5qwsgq1xy27rm4m2wcciv2ncbvycqg1pqv8txgwd4uj4zx49wtxdj6pj2we 00:07:34.068 13:26:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:34.068 [2024-11-20 13:26:45.997649] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:34.068 [2024-11-20 13:26:45.997799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61260 ] 00:07:34.327 [2024-11-20 13:26:46.141850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.327 [2024-11-20 13:26:46.207664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.327 [2024-11-20 13:26:46.268975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.260  [2024-11-20T13:26:47.475Z] Copying: 511/511 [MB] (average 973 MBps) 00:07:35.518 00:07:35.518 13:26:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:35.518 13:26:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:35.518 13:26:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:35.518 13:26:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:35.777 [2024-11-20 13:26:47.519329] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:35.777 [2024-11-20 13:26:47.519433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61276 ] 00:07:35.777 { 00:07:35.777 "subsystems": [ 00:07:35.777 { 00:07:35.777 "subsystem": "bdev", 00:07:35.777 "config": [ 00:07:35.777 { 00:07:35.777 "params": { 00:07:35.777 "block_size": 512, 00:07:35.777 "num_blocks": 1048576, 00:07:35.777 "name": "malloc0" 00:07:35.777 }, 00:07:35.777 "method": "bdev_malloc_create" 00:07:35.777 }, 00:07:35.777 { 00:07:35.777 "params": { 00:07:35.777 "filename": "/dev/zram1", 00:07:35.777 "name": "uring0" 00:07:35.777 }, 00:07:35.777 "method": "bdev_uring_create" 00:07:35.777 }, 00:07:35.777 { 00:07:35.777 "method": "bdev_wait_for_examine" 00:07:35.777 } 00:07:35.777 ] 00:07:35.777 } 00:07:35.777 ] 00:07:35.777 } 00:07:35.777 [2024-11-20 13:26:47.663525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.777 [2024-11-20 13:26:47.731309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.035 [2024-11-20 13:26:47.796225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.415  [2024-11-20T13:26:50.306Z] Copying: 222/512 [MB] (222 MBps) [2024-11-20T13:26:50.564Z] Copying: 446/512 [MB] (224 MBps) [2024-11-20T13:26:50.822Z] Copying: 512/512 [MB] (average 223 MBps) 00:07:38.865 00:07:38.865 13:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:38.865 13:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:38.865 13:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:38.865 13:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:38.865 { 00:07:38.865 "subsystems": [ 00:07:38.865 { 00:07:38.865 "subsystem": "bdev", 00:07:38.865 "config": [ 00:07:38.865 { 00:07:38.865 "params": { 00:07:38.865 "block_size": 512, 00:07:38.865 "num_blocks": 1048576, 00:07:38.865 "name": "malloc0" 00:07:38.865 }, 00:07:38.865 "method": "bdev_malloc_create" 00:07:38.865 }, 00:07:38.865 { 00:07:38.865 "params": { 00:07:38.865 "filename": "/dev/zram1", 00:07:38.865 "name": "uring0" 00:07:38.865 }, 00:07:38.865 "method": "bdev_uring_create" 00:07:38.865 }, 00:07:38.865 { 00:07:38.865 "method": "bdev_wait_for_examine" 00:07:38.865 } 00:07:38.865 ] 00:07:38.865 } 00:07:38.865 ] 00:07:38.865 } 00:07:38.865 [2024-11-20 13:26:50.763399] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:38.865 [2024-11-20 13:26:50.763504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61320 ] 00:07:39.123 [2024-11-20 13:26:50.916880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.123 [2024-11-20 13:26:50.980009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.123 [2024-11-20 13:26:51.034955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.497  [2024-11-20T13:26:53.386Z] Copying: 168/512 [MB] (168 MBps) [2024-11-20T13:26:54.326Z] Copying: 353/512 [MB] (184 MBps) [2024-11-20T13:26:54.892Z] Copying: 512/512 [MB] (average 170 MBps) 00:07:42.935 00:07:42.935 13:26:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:42.935 13:26:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 5kbqikac9n3ddxwqhgq6lwy2nww033i6osej3ikdzl9frwdzzjr4vn9l5cjkfj1tqwjmltiqa2ycxhp5ihj0o5ndjt4977wr1po5jwvkl55d3wt5v1wqkj745blt71c958p0hsnb94r17adldw6j29uw9yuvy5urrgmgzliju98kgci01518hi4jzwqq5m6jacbya1sqygisrlomwsuti65z2uxa8fqlducylzruothcv513n68jgm37xa4z4jlaxx0vuum26lhaxbhk7bkoaik9isrmzxiqr53ihmnq59xq667wrrdoq6cvrzod0r2p76w27eddo50tvdomdsl4lt2xmgnh99p2doz3bys1lfaqjvp0yb2yk4s4lvoclt3uiwzzyrj6j5lull0xq3avvv0o14sz5zj87utkzze50b743a4u4eivq7uog4l8wuc2kh0u762n40s5olr9ymw9reihyjaevb9y7nzshjwxo3japywuwqbtu8dtiz808pjmq9bicobupptfaypjfr0oqnvdvixc03icvz06lewf3cuzzt2hpm5faxwdn0u0iam7o0f6xows6dwydkyzr75imkfm71c7saomd39nxx30enmeucbzoinjyza1hkz2abrb4cl5te8mvh59um8f9dmgnzy3dqjc81yokzabhntyvkqfao5crv0fadpxu4i66hcu3ubv9fnagpqg6l9kvnmq4ooymkw63dp1hl2crnsi21rg3gaquuflwrtdkbhhevc1em1iyb225w7nk3n5k9m0bi718i91ty1znz5x7mis3zd8vjma4xopu5l76iba5xxb6af6jaq160q6l9uykyfohdydo6rt0mdr1ze02c8k4rmdtdxeh9ca8x39x87pw8apxzpa5xs37l5fo10q2ll6c80tk5hdmrfvjjaw3lfo9i2nqjtg1d1wauajen7dte3ku8rjq5qwsgq1xy27rm4m2wcciv2ncbvycqg1pqv8txgwd4uj4zx49wtxdj6pj2we == 
\5\k\b\q\i\k\a\c\9\n\3\d\d\x\w\q\h\g\q\6\l\w\y\2\n\w\w\0\3\3\i\6\o\s\e\j\3\i\k\d\z\l\9\f\r\w\d\z\z\j\r\4\v\n\9\l\5\c\j\k\f\j\1\t\q\w\j\m\l\t\i\q\a\2\y\c\x\h\p\5\i\h\j\0\o\5\n\d\j\t\4\9\7\7\w\r\1\p\o\5\j\w\v\k\l\5\5\d\3\w\t\5\v\1\w\q\k\j\7\4\5\b\l\t\7\1\c\9\5\8\p\0\h\s\n\b\9\4\r\1\7\a\d\l\d\w\6\j\2\9\u\w\9\y\u\v\y\5\u\r\r\g\m\g\z\l\i\j\u\9\8\k\g\c\i\0\1\5\1\8\h\i\4\j\z\w\q\q\5\m\6\j\a\c\b\y\a\1\s\q\y\g\i\s\r\l\o\m\w\s\u\t\i\6\5\z\2\u\x\a\8\f\q\l\d\u\c\y\l\z\r\u\o\t\h\c\v\5\1\3\n\6\8\j\g\m\3\7\x\a\4\z\4\j\l\a\x\x\0\v\u\u\m\2\6\l\h\a\x\b\h\k\7\b\k\o\a\i\k\9\i\s\r\m\z\x\i\q\r\5\3\i\h\m\n\q\5\9\x\q\6\6\7\w\r\r\d\o\q\6\c\v\r\z\o\d\0\r\2\p\7\6\w\2\7\e\d\d\o\5\0\t\v\d\o\m\d\s\l\4\l\t\2\x\m\g\n\h\9\9\p\2\d\o\z\3\b\y\s\1\l\f\a\q\j\v\p\0\y\b\2\y\k\4\s\4\l\v\o\c\l\t\3\u\i\w\z\z\y\r\j\6\j\5\l\u\l\l\0\x\q\3\a\v\v\v\0\o\1\4\s\z\5\z\j\8\7\u\t\k\z\z\e\5\0\b\7\4\3\a\4\u\4\e\i\v\q\7\u\o\g\4\l\8\w\u\c\2\k\h\0\u\7\6\2\n\4\0\s\5\o\l\r\9\y\m\w\9\r\e\i\h\y\j\a\e\v\b\9\y\7\n\z\s\h\j\w\x\o\3\j\a\p\y\w\u\w\q\b\t\u\8\d\t\i\z\8\0\8\p\j\m\q\9\b\i\c\o\b\u\p\p\t\f\a\y\p\j\f\r\0\o\q\n\v\d\v\i\x\c\0\3\i\c\v\z\0\6\l\e\w\f\3\c\u\z\z\t\2\h\p\m\5\f\a\x\w\d\n\0\u\0\i\a\m\7\o\0\f\6\x\o\w\s\6\d\w\y\d\k\y\z\r\7\5\i\m\k\f\m\7\1\c\7\s\a\o\m\d\3\9\n\x\x\3\0\e\n\m\e\u\c\b\z\o\i\n\j\y\z\a\1\h\k\z\2\a\b\r\b\4\c\l\5\t\e\8\m\v\h\5\9\u\m\8\f\9\d\m\g\n\z\y\3\d\q\j\c\8\1\y\o\k\z\a\b\h\n\t\y\v\k\q\f\a\o\5\c\r\v\0\f\a\d\p\x\u\4\i\6\6\h\c\u\3\u\b\v\9\f\n\a\g\p\q\g\6\l\9\k\v\n\m\q\4\o\o\y\m\k\w\6\3\d\p\1\h\l\2\c\r\n\s\i\2\1\r\g\3\g\a\q\u\u\f\l\w\r\t\d\k\b\h\h\e\v\c\1\e\m\1\i\y\b\2\2\5\w\7\n\k\3\n\5\k\9\m\0\b\i\7\1\8\i\9\1\t\y\1\z\n\z\5\x\7\m\i\s\3\z\d\8\v\j\m\a\4\x\o\p\u\5\l\7\6\i\b\a\5\x\x\b\6\a\f\6\j\a\q\1\6\0\q\6\l\9\u\y\k\y\f\o\h\d\y\d\o\6\r\t\0\m\d\r\1\z\e\0\2\c\8\k\4\r\m\d\t\d\x\e\h\9\c\a\8\x\3\9\x\8\7\p\w\8\a\p\x\z\p\a\5\x\s\3\7\l\5\f\o\1\0\q\2\l\l\6\c\8\0\t\k\5\h\d\m\r\f\v\j\j\a\w\3\l\f\o\9\i\2\n\q\j\t\g\1\d\1\w\a\u\a\j\e\n\7\d\t\e\3\k\u\8\r\j\q\5\q\w\s\g\q\1\x\y\2\7\r\m\4\m\2\w\c\c\i\v\2\n\c\b\v\y\c\q\g\1\p\q\v\8\t\x\g\w\d\4\u\j\4\z\x\4\9\w\t\x\d\j\6\p\j\2\w\e ]] 00:07:42.935 13:26:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:42.936 13:26:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 5kbqikac9n3ddxwqhgq6lwy2nww033i6osej3ikdzl9frwdzzjr4vn9l5cjkfj1tqwjmltiqa2ycxhp5ihj0o5ndjt4977wr1po5jwvkl55d3wt5v1wqkj745blt71c958p0hsnb94r17adldw6j29uw9yuvy5urrgmgzliju98kgci01518hi4jzwqq5m6jacbya1sqygisrlomwsuti65z2uxa8fqlducylzruothcv513n68jgm37xa4z4jlaxx0vuum26lhaxbhk7bkoaik9isrmzxiqr53ihmnq59xq667wrrdoq6cvrzod0r2p76w27eddo50tvdomdsl4lt2xmgnh99p2doz3bys1lfaqjvp0yb2yk4s4lvoclt3uiwzzyrj6j5lull0xq3avvv0o14sz5zj87utkzze50b743a4u4eivq7uog4l8wuc2kh0u762n40s5olr9ymw9reihyjaevb9y7nzshjwxo3japywuwqbtu8dtiz808pjmq9bicobupptfaypjfr0oqnvdvixc03icvz06lewf3cuzzt2hpm5faxwdn0u0iam7o0f6xows6dwydkyzr75imkfm71c7saomd39nxx30enmeucbzoinjyza1hkz2abrb4cl5te8mvh59um8f9dmgnzy3dqjc81yokzabhntyvkqfao5crv0fadpxu4i66hcu3ubv9fnagpqg6l9kvnmq4ooymkw63dp1hl2crnsi21rg3gaquuflwrtdkbhhevc1em1iyb225w7nk3n5k9m0bi718i91ty1znz5x7mis3zd8vjma4xopu5l76iba5xxb6af6jaq160q6l9uykyfohdydo6rt0mdr1ze02c8k4rmdtdxeh9ca8x39x87pw8apxzpa5xs37l5fo10q2ll6c80tk5hdmrfvjjaw3lfo9i2nqjtg1d1wauajen7dte3ku8rjq5qwsgq1xy27rm4m2wcciv2ncbvycqg1pqv8txgwd4uj4zx49wtxdj6pj2we == 
\5\k\b\q\i\k\a\c\9\n\3\d\d\x\w\q\h\g\q\6\l\w\y\2\n\w\w\0\3\3\i\6\o\s\e\j\3\i\k\d\z\l\9\f\r\w\d\z\z\j\r\4\v\n\9\l\5\c\j\k\f\j\1\t\q\w\j\m\l\t\i\q\a\2\y\c\x\h\p\5\i\h\j\0\o\5\n\d\j\t\4\9\7\7\w\r\1\p\o\5\j\w\v\k\l\5\5\d\3\w\t\5\v\1\w\q\k\j\7\4\5\b\l\t\7\1\c\9\5\8\p\0\h\s\n\b\9\4\r\1\7\a\d\l\d\w\6\j\2\9\u\w\9\y\u\v\y\5\u\r\r\g\m\g\z\l\i\j\u\9\8\k\g\c\i\0\1\5\1\8\h\i\4\j\z\w\q\q\5\m\6\j\a\c\b\y\a\1\s\q\y\g\i\s\r\l\o\m\w\s\u\t\i\6\5\z\2\u\x\a\8\f\q\l\d\u\c\y\l\z\r\u\o\t\h\c\v\5\1\3\n\6\8\j\g\m\3\7\x\a\4\z\4\j\l\a\x\x\0\v\u\u\m\2\6\l\h\a\x\b\h\k\7\b\k\o\a\i\k\9\i\s\r\m\z\x\i\q\r\5\3\i\h\m\n\q\5\9\x\q\6\6\7\w\r\r\d\o\q\6\c\v\r\z\o\d\0\r\2\p\7\6\w\2\7\e\d\d\o\5\0\t\v\d\o\m\d\s\l\4\l\t\2\x\m\g\n\h\9\9\p\2\d\o\z\3\b\y\s\1\l\f\a\q\j\v\p\0\y\b\2\y\k\4\s\4\l\v\o\c\l\t\3\u\i\w\z\z\y\r\j\6\j\5\l\u\l\l\0\x\q\3\a\v\v\v\0\o\1\4\s\z\5\z\j\8\7\u\t\k\z\z\e\5\0\b\7\4\3\a\4\u\4\e\i\v\q\7\u\o\g\4\l\8\w\u\c\2\k\h\0\u\7\6\2\n\4\0\s\5\o\l\r\9\y\m\w\9\r\e\i\h\y\j\a\e\v\b\9\y\7\n\z\s\h\j\w\x\o\3\j\a\p\y\w\u\w\q\b\t\u\8\d\t\i\z\8\0\8\p\j\m\q\9\b\i\c\o\b\u\p\p\t\f\a\y\p\j\f\r\0\o\q\n\v\d\v\i\x\c\0\3\i\c\v\z\0\6\l\e\w\f\3\c\u\z\z\t\2\h\p\m\5\f\a\x\w\d\n\0\u\0\i\a\m\7\o\0\f\6\x\o\w\s\6\d\w\y\d\k\y\z\r\7\5\i\m\k\f\m\7\1\c\7\s\a\o\m\d\3\9\n\x\x\3\0\e\n\m\e\u\c\b\z\o\i\n\j\y\z\a\1\h\k\z\2\a\b\r\b\4\c\l\5\t\e\8\m\v\h\5\9\u\m\8\f\9\d\m\g\n\z\y\3\d\q\j\c\8\1\y\o\k\z\a\b\h\n\t\y\v\k\q\f\a\o\5\c\r\v\0\f\a\d\p\x\u\4\i\6\6\h\c\u\3\u\b\v\9\f\n\a\g\p\q\g\6\l\9\k\v\n\m\q\4\o\o\y\m\k\w\6\3\d\p\1\h\l\2\c\r\n\s\i\2\1\r\g\3\g\a\q\u\u\f\l\w\r\t\d\k\b\h\h\e\v\c\1\e\m\1\i\y\b\2\2\5\w\7\n\k\3\n\5\k\9\m\0\b\i\7\1\8\i\9\1\t\y\1\z\n\z\5\x\7\m\i\s\3\z\d\8\v\j\m\a\4\x\o\p\u\5\l\7\6\i\b\a\5\x\x\b\6\a\f\6\j\a\q\1\6\0\q\6\l\9\u\y\k\y\f\o\h\d\y\d\o\6\r\t\0\m\d\r\1\z\e\0\2\c\8\k\4\r\m\d\t\d\x\e\h\9\c\a\8\x\3\9\x\8\7\p\w\8\a\p\x\z\p\a\5\x\s\3\7\l\5\f\o\1\0\q\2\l\l\6\c\8\0\t\k\5\h\d\m\r\f\v\j\j\a\w\3\l\f\o\9\i\2\n\q\j\t\g\1\d\1\w\a\u\a\j\e\n\7\d\t\e\3\k\u\8\r\j\q\5\q\w\s\g\q\1\x\y\2\7\r\m\4\m\2\w\c\c\i\v\2\n\c\b\v\y\c\q\g\1\p\q\v\8\t\x\g\w\d\4\u\j\4\z\x\4\9\w\t\x\d\j\6\p\j\2\w\e ]] 00:07:42.936 13:26:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:43.503 13:26:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:43.503 13:26:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:43.503 13:26:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:43.503 13:26:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:43.503 [2024-11-20 13:26:55.264592] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:43.503 [2024-11-20 13:26:55.264696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61389 ] 00:07:43.503 { 00:07:43.503 "subsystems": [ 00:07:43.503 { 00:07:43.503 "subsystem": "bdev", 00:07:43.503 "config": [ 00:07:43.503 { 00:07:43.503 "params": { 00:07:43.503 "block_size": 512, 00:07:43.503 "num_blocks": 1048576, 00:07:43.503 "name": "malloc0" 00:07:43.503 }, 00:07:43.503 "method": "bdev_malloc_create" 00:07:43.503 }, 00:07:43.503 { 00:07:43.503 "params": { 00:07:43.503 "filename": "/dev/zram1", 00:07:43.503 "name": "uring0" 00:07:43.503 }, 00:07:43.503 "method": "bdev_uring_create" 00:07:43.503 }, 00:07:43.503 { 00:07:43.503 "method": "bdev_wait_for_examine" 00:07:43.503 } 00:07:43.503 ] 00:07:43.503 } 00:07:43.503 ] 00:07:43.503 } 00:07:43.503 [2024-11-20 13:26:55.412393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.761 [2024-11-20 13:26:55.483500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.762 [2024-11-20 13:26:55.541667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.135  [2024-11-20T13:26:58.024Z] Copying: 149/512 [MB] (149 MBps) [2024-11-20T13:26:59.013Z] Copying: 300/512 [MB] (150 MBps) [2024-11-20T13:26:59.272Z] Copying: 450/512 [MB] (149 MBps) [2024-11-20T13:26:59.839Z] Copying: 512/512 [MB] (average 150 MBps) 00:07:47.882 00:07:47.882 13:26:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:47.882 13:26:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:47.882 13:26:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:47.882 13:26:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:47.882 13:26:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:47.882 13:26:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:47.882 13:26:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:47.882 13:26:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:47.882 [2024-11-20 13:26:59.598962] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:47.882 [2024-11-20 13:26:59.599064] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61450 ] 00:07:47.882 { 00:07:47.882 "subsystems": [ 00:07:47.882 { 00:07:47.882 "subsystem": "bdev", 00:07:47.882 "config": [ 00:07:47.882 { 00:07:47.882 "params": { 00:07:47.882 "block_size": 512, 00:07:47.882 "num_blocks": 1048576, 00:07:47.882 "name": "malloc0" 00:07:47.882 }, 00:07:47.882 "method": "bdev_malloc_create" 00:07:47.882 }, 00:07:47.882 { 00:07:47.882 "params": { 00:07:47.882 "filename": "/dev/zram1", 00:07:47.882 "name": "uring0" 00:07:47.882 }, 00:07:47.882 "method": "bdev_uring_create" 00:07:47.882 }, 00:07:47.882 { 00:07:47.882 "params": { 00:07:47.882 "name": "uring0" 00:07:47.882 }, 00:07:47.882 "method": "bdev_uring_delete" 00:07:47.882 }, 00:07:47.882 { 00:07:47.882 "method": "bdev_wait_for_examine" 00:07:47.882 } 00:07:47.882 ] 00:07:47.882 } 00:07:47.882 ] 00:07:47.882 } 00:07:47.882 [2024-11-20 13:26:59.738942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.882 [2024-11-20 13:26:59.802559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.140 [2024-11-20 13:26:59.857515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.140  [2024-11-20T13:27:00.664Z] Copying: 0/0 [B] (average 0 Bps) 00:07:48.707 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.707 13:27:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.707 13:27:00 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:48.707 { 00:07:48.707 "subsystems": [ 00:07:48.707 { 00:07:48.707 "subsystem": "bdev", 00:07:48.707 "config": [ 00:07:48.707 { 00:07:48.707 "params": { 00:07:48.707 "block_size": 512, 00:07:48.707 "num_blocks": 1048576, 00:07:48.707 "name": "malloc0" 00:07:48.707 }, 00:07:48.707 "method": "bdev_malloc_create" 00:07:48.707 }, 00:07:48.707 { 00:07:48.707 "params": { 00:07:48.707 "filename": "/dev/zram1", 00:07:48.707 "name": "uring0" 00:07:48.707 }, 00:07:48.707 "method": "bdev_uring_create" 00:07:48.707 }, 00:07:48.707 { 00:07:48.707 "params": { 00:07:48.707 "name": "uring0" 00:07:48.707 }, 00:07:48.707 "method": "bdev_uring_delete" 00:07:48.707 }, 00:07:48.707 { 00:07:48.707 "method": "bdev_wait_for_examine" 00:07:48.707 } 00:07:48.707 ] 00:07:48.707 } 00:07:48.707 ] 00:07:48.707 } 00:07:48.707 [2024-11-20 13:27:00.512753] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:48.707 [2024-11-20 13:27:00.512859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61475 ] 00:07:48.707 [2024-11-20 13:27:00.658033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.965 [2024-11-20 13:27:00.720389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.965 [2024-11-20 13:27:00.774451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.224 [2024-11-20 13:27:00.976783] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:49.224 [2024-11-20 13:27:00.976827] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:49.224 [2024-11-20 13:27:00.976839] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:49.224 [2024-11-20 13:27:00.976850] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.481 [2024-11-20 13:27:01.288407] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:49.481 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:07:49.481 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:49.481 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:07:49.481 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:07:49.481 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:07:49.481 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:49.481 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:49.481 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:49.481 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:49.481 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:49.481 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:49.481 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:49.738 00:07:49.738 real 0m15.764s 00:07:49.738 user 0m10.732s 00:07:49.738 sys 0m13.201s 00:07:49.738 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.738 13:27:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:49.738 ************************************ 00:07:49.738 END TEST dd_uring_copy 00:07:49.738 ************************************ 00:07:49.997 00:07:49.997 real 0m15.996s 00:07:49.997 user 0m10.864s 00:07:49.997 sys 0m13.306s 00:07:49.997 13:27:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.997 13:27:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:49.997 ************************************ 00:07:49.997 END TEST spdk_dd_uring 00:07:49.997 ************************************ 00:07:49.997 13:27:01 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:49.997 13:27:01 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.997 13:27:01 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.997 13:27:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:49.997 ************************************ 00:07:49.997 START TEST spdk_dd_sparse 00:07:49.997 ************************************ 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:49.997 * Looking for test storage... 00:07:49.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:49.997 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:50.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.256 --rc genhtml_branch_coverage=1 00:07:50.256 --rc genhtml_function_coverage=1 00:07:50.256 --rc genhtml_legend=1 00:07:50.256 --rc geninfo_all_blocks=1 00:07:50.256 --rc geninfo_unexecuted_blocks=1 00:07:50.256 00:07:50.256 ' 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:50.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.256 --rc genhtml_branch_coverage=1 00:07:50.256 --rc genhtml_function_coverage=1 00:07:50.256 --rc genhtml_legend=1 00:07:50.256 --rc geninfo_all_blocks=1 00:07:50.256 --rc geninfo_unexecuted_blocks=1 00:07:50.256 00:07:50.256 ' 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:50.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.256 --rc genhtml_branch_coverage=1 00:07:50.256 --rc genhtml_function_coverage=1 00:07:50.256 --rc genhtml_legend=1 00:07:50.256 --rc geninfo_all_blocks=1 00:07:50.256 --rc geninfo_unexecuted_blocks=1 00:07:50.256 00:07:50.256 ' 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:50.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.256 --rc genhtml_branch_coverage=1 00:07:50.256 --rc genhtml_function_coverage=1 00:07:50.256 --rc genhtml_legend=1 00:07:50.256 --rc geninfo_all_blocks=1 00:07:50.256 --rc geninfo_unexecuted_blocks=1 00:07:50.256 00:07:50.256 ' 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.256 13:27:01 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:50.256 1+0 records in 00:07:50.256 1+0 records out 00:07:50.256 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00437247 s, 959 MB/s 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:50.256 1+0 records in 00:07:50.256 1+0 records out 00:07:50.256 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00668869 s, 627 MB/s 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:50.256 1+0 records in 00:07:50.256 1+0 records out 00:07:50.256 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00622665 s, 674 MB/s 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.256 13:27:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:50.256 ************************************ 00:07:50.256 START TEST dd_sparse_file_to_file 00:07:50.256 ************************************ 00:07:50.256 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:07:50.256 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:50.256 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:50.256 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:50.256 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:50.256 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:50.256 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:50.256 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:50.256 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:50.256 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:50.256 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:50.256 [2024-11-20 13:27:02.062612] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
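For reference, the prepare step recorded above builds the sparse test input by hand; a minimal stand-alone sketch in bash, with every command, path and size taken from the log (run in a scratch directory, outside the run_test harness):

    # 100 MiB backing file for the AIO bdev used by the sparse tests
    truncate dd_sparse_aio_disk --size 104857600
    # write 4 MiB of zeroes at offsets 0, 16 MiB and 32 MiB, leaving holes in between;
    # the result is a 36 MiB (37748736-byte) file_zero1 with only 12 MiB actually allocated,
    # matching the stat %s/%b values checked later in this log
    dd if=/dev/zero of=file_zero1 bs=4M count=1
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8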
00:07:50.256 [2024-11-20 13:27:02.062743] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61582 ] 00:07:50.256 { 00:07:50.256 "subsystems": [ 00:07:50.256 { 00:07:50.256 "subsystem": "bdev", 00:07:50.256 "config": [ 00:07:50.256 { 00:07:50.256 "params": { 00:07:50.256 "block_size": 4096, 00:07:50.256 "filename": "dd_sparse_aio_disk", 00:07:50.256 "name": "dd_aio" 00:07:50.256 }, 00:07:50.256 "method": "bdev_aio_create" 00:07:50.256 }, 00:07:50.256 { 00:07:50.256 "params": { 00:07:50.256 "lvs_name": "dd_lvstore", 00:07:50.256 "bdev_name": "dd_aio" 00:07:50.256 }, 00:07:50.256 "method": "bdev_lvol_create_lvstore" 00:07:50.256 }, 00:07:50.256 { 00:07:50.256 "method": "bdev_wait_for_examine" 00:07:50.256 } 00:07:50.256 ] 00:07:50.256 } 00:07:50.256 ] 00:07:50.256 } 00:07:50.526 [2024-11-20 13:27:02.211778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.526 [2024-11-20 13:27:02.273939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.526 [2024-11-20 13:27:02.328369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.526  [2024-11-20T13:27:02.747Z] Copying: 12/36 [MB] (average 857 MBps) 00:07:50.790 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:50.790 00:07:50.790 real 0m0.658s 00:07:50.790 user 0m0.406s 00:07:50.790 sys 0m0.351s 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:50.790 ************************************ 00:07:50.790 END TEST dd_sparse_file_to_file 00:07:50.790 ************************************ 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:50.790 ************************************ 00:07:50.790 START TEST dd_sparse_file_to_bdev 
00:07:50.790 ************************************ 00:07:50.790 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:07:50.791 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:50.791 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:50.791 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:50.791 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:50.791 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:50.791 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:50.791 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:50.791 13:27:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:51.048 [2024-11-20 13:27:02.754590] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:51.048 [2024-11-20 13:27:02.754689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61623 ] 00:07:51.048 { 00:07:51.048 "subsystems": [ 00:07:51.048 { 00:07:51.048 "subsystem": "bdev", 00:07:51.048 "config": [ 00:07:51.048 { 00:07:51.048 "params": { 00:07:51.048 "block_size": 4096, 00:07:51.048 "filename": "dd_sparse_aio_disk", 00:07:51.048 "name": "dd_aio" 00:07:51.048 }, 00:07:51.048 "method": "bdev_aio_create" 00:07:51.048 }, 00:07:51.048 { 00:07:51.048 "params": { 00:07:51.048 "lvs_name": "dd_lvstore", 00:07:51.048 "lvol_name": "dd_lvol", 00:07:51.048 "size_in_mib": 36, 00:07:51.048 "thin_provision": true 00:07:51.048 }, 00:07:51.048 "method": "bdev_lvol_create" 00:07:51.048 }, 00:07:51.048 { 00:07:51.048 "method": "bdev_wait_for_examine" 00:07:51.048 } 00:07:51.048 ] 00:07:51.048 } 00:07:51.048 ] 00:07:51.048 } 00:07:51.048 [2024-11-20 13:27:02.899376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.048 [2024-11-20 13:27:02.961752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.306 [2024-11-20 13:27:03.015408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.306  [2024-11-20T13:27:03.521Z] Copying: 12/36 [MB] (average 545 MBps) 00:07:51.564 00:07:51.564 00:07:51.564 real 0m0.611s 00:07:51.564 user 0m0.396s 00:07:51.564 sys 0m0.329s 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:51.564 ************************************ 00:07:51.564 END TEST dd_sparse_file_to_bdev 00:07:51.564 ************************************ 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:51.564 ************************************ 00:07:51.564 START TEST dd_sparse_bdev_to_file 00:07:51.564 ************************************ 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:51.564 13:27:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:51.564 [2024-11-20 13:27:03.417990] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:51.564 [2024-11-20 13:27:03.418112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61657 ] 00:07:51.564 { 00:07:51.564 "subsystems": [ 00:07:51.564 { 00:07:51.564 "subsystem": "bdev", 00:07:51.564 "config": [ 00:07:51.564 { 00:07:51.564 "params": { 00:07:51.564 "block_size": 4096, 00:07:51.564 "filename": "dd_sparse_aio_disk", 00:07:51.564 "name": "dd_aio" 00:07:51.564 }, 00:07:51.564 "method": "bdev_aio_create" 00:07:51.564 }, 00:07:51.564 { 00:07:51.564 "method": "bdev_wait_for_examine" 00:07:51.564 } 00:07:51.564 ] 00:07:51.564 } 00:07:51.564 ] 00:07:51.564 } 00:07:51.822 [2024-11-20 13:27:03.566603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.822 [2024-11-20 13:27:03.632051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.822 [2024-11-20 13:27:03.685551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.822  [2024-11-20T13:27:04.036Z] Copying: 12/36 [MB] (average 1090 MBps) 00:07:52.079 00:07:52.079 13:27:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:52.079 13:27:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:52.335 00:07:52.335 real 0m0.680s 00:07:52.335 user 0m0.429s 00:07:52.335 sys 0m0.352s 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:52.335 ************************************ 00:07:52.335 END TEST dd_sparse_bdev_to_file 00:07:52.335 ************************************ 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:52.335 00:07:52.335 real 0m2.335s 00:07:52.335 user 0m1.399s 00:07:52.335 sys 0m1.247s 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.335 13:27:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:52.335 ************************************ 00:07:52.335 END TEST spdk_dd_sparse 00:07:52.335 ************************************ 00:07:52.335 13:27:04 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:52.335 13:27:04 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.335 13:27:04 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.335 13:27:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:52.335 ************************************ 00:07:52.335 START TEST spdk_dd_negative 00:07:52.335 ************************************ 00:07:52.335 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:52.335 * Looking for test storage... 
00:07:52.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:52.335 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:52.335 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:07:52.335 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.593 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:52.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.594 --rc genhtml_branch_coverage=1 00:07:52.594 --rc genhtml_function_coverage=1 00:07:52.594 --rc genhtml_legend=1 00:07:52.594 --rc geninfo_all_blocks=1 00:07:52.594 --rc geninfo_unexecuted_blocks=1 00:07:52.594 00:07:52.594 ' 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:52.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.594 --rc genhtml_branch_coverage=1 00:07:52.594 --rc genhtml_function_coverage=1 00:07:52.594 --rc genhtml_legend=1 00:07:52.594 --rc geninfo_all_blocks=1 00:07:52.594 --rc geninfo_unexecuted_blocks=1 00:07:52.594 00:07:52.594 ' 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:52.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.594 --rc genhtml_branch_coverage=1 00:07:52.594 --rc genhtml_function_coverage=1 00:07:52.594 --rc genhtml_legend=1 00:07:52.594 --rc geninfo_all_blocks=1 00:07:52.594 --rc geninfo_unexecuted_blocks=1 00:07:52.594 00:07:52.594 ' 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:52.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.594 --rc genhtml_branch_coverage=1 00:07:52.594 --rc genhtml_function_coverage=1 00:07:52.594 --rc genhtml_legend=1 00:07:52.594 --rc geninfo_all_blocks=1 00:07:52.594 --rc geninfo_unexecuted_blocks=1 00:07:52.594 00:07:52.594 ' 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:52.594 ************************************ 00:07:52.594 START TEST 
dd_invalid_arguments 00:07:52.594 ************************************ 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:52.594 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:52.594 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:52.594 00:07:52.594 CPU options: 00:07:52.594 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:52.594 (like [0,1,10]) 00:07:52.594 --lcores lcore to CPU mapping list. The list is in the format: 00:07:52.594 [<,lcores[@CPUs]>...] 00:07:52.594 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:52.594 Within the group, '-' is used for range separator, 00:07:52.594 ',' is used for single number separator. 00:07:52.594 '( )' can be omitted for single element group, 00:07:52.594 '@' can be omitted if cpus and lcores have the same value 00:07:52.594 --disable-cpumask-locks Disable CPU core lock files. 00:07:52.594 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:52.594 pollers in the app support interrupt mode) 00:07:52.594 -p, --main-core main (primary) core for DPDK 00:07:52.594 00:07:52.594 Configuration options: 00:07:52.594 -c, --config, --json JSON config file 00:07:52.594 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:52.594 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:52.594 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:52.594 --rpcs-allowed comma-separated list of permitted RPCS 00:07:52.594 --json-ignore-init-errors don't exit on invalid config entry 00:07:52.594 00:07:52.594 Memory options: 00:07:52.594 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:52.594 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:52.594 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:52.594 -R, --huge-unlink unlink huge files after initialization 00:07:52.594 -n, --mem-channels number of memory channels used for DPDK 00:07:52.594 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:52.594 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:52.594 --no-huge run without using hugepages 00:07:52.594 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:52.594 -i, --shm-id shared memory ID (optional) 00:07:52.594 -g, --single-file-segments force creating just one hugetlbfs file 00:07:52.594 00:07:52.594 PCI options: 00:07:52.594 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:52.594 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:52.594 -u, --no-pci disable PCI access 00:07:52.594 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:52.594 00:07:52.594 Log options: 00:07:52.594 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:52.594 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:52.594 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:52.594 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:52.594 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:52.595 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:52.595 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:52.595 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:52.595 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:52.595 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:52.595 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:52.595 --silence-noticelog disable notice level logging to stderr 00:07:52.595 00:07:52.595 Trace options: 00:07:52.595 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:52.595 setting 0 to disable trace (default 32768) 00:07:52.595 Tracepoints vary in size and can use more than one trace entry. 00:07:52.595 -e, --tpoint-group [:] 00:07:52.595 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:52.595 [2024-11-20 13:27:04.388532] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:52.595 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:52.595 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:52.595 bdev_raid, scheduler, all). 00:07:52.595 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:52.595 a tracepoint group. First tpoint inside a group can be enabled by 00:07:52.595 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:52.595 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:52.595 in /include/spdk_internal/trace_defs.h 00:07:52.595 00:07:52.595 Other options: 00:07:52.595 -h, --help show this usage 00:07:52.595 -v, --version print SPDK version 00:07:52.595 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:52.595 --env-context Opaque context for use of the env implementation 00:07:52.595 00:07:52.595 Application specific: 00:07:52.595 [--------- DD Options ---------] 00:07:52.595 --if Input file. Must specify either --if or --ib. 00:07:52.595 --ib Input bdev. Must specifier either --if or --ib 00:07:52.595 --of Output file. Must specify either --of or --ob. 00:07:52.595 --ob Output bdev. Must specify either --of or --ob. 00:07:52.595 --iflag Input file flags. 00:07:52.595 --oflag Output file flags. 00:07:52.595 --bs I/O unit size (default: 4096) 00:07:52.595 --qd Queue depth (default: 2) 00:07:52.595 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:52.595 --skip Skip this many I/O units at start of input. (default: 0) 00:07:52.595 --seek Skip this many I/O units at start of output. (default: 0) 00:07:52.595 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:52.595 --sparse Enable hole skipping in input target 00:07:52.595 Available iflag and oflag values: 00:07:52.595 append - append mode 00:07:52.595 direct - use direct I/O for data 00:07:52.595 directory - fail unless a directory 00:07:52.595 dsync - use synchronized I/O for data 00:07:52.595 noatime - do not update access time 00:07:52.595 noctty - do not assign controlling terminal from file 00:07:52.595 nofollow - do not follow symlinks 00:07:52.595 nonblock - use non-blocking I/O 00:07:52.595 sync - use synchronized I/O for data and metadata 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.595 00:07:52.595 real 0m0.070s 00:07:52.595 user 0m0.042s 00:07:52.595 sys 0m0.026s 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:52.595 ************************************ 00:07:52.595 END TEST dd_invalid_arguments 00:07:52.595 ************************************ 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:52.595 ************************************ 00:07:52.595 START TEST dd_double_input 00:07:52.595 ************************************ 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:52.595 [2024-11-20 13:27:04.501065] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
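The double-input failure above reduces to a single stand-alone check; a minimal sketch in bash, with the binary and dump-file paths taken from the log (the run_test/NOT harness wrappers are left out):

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    # giving spdk_dd both a file input (--if) and a bdev input (--ib) must fail with
    # "You may specify either --if or --ib, but not both."
    if "$SPDK_DD" --if="$DUMP0" --ib= --ob=; then
        echo "spdk_dd unexpectedly accepted both --if and --ib" >&2
        exit 1
    fi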
00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.595 00:07:52.595 real 0m0.067s 00:07:52.595 user 0m0.040s 00:07:52.595 sys 0m0.026s 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.595 13:27:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:52.595 ************************************ 00:07:52.595 END TEST dd_double_input 00:07:52.595 ************************************ 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:52.852 ************************************ 00:07:52.852 START TEST dd_double_output 00:07:52.852 ************************************ 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:52.852 [2024-11-20 13:27:04.614844] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.852 00:07:52.852 real 0m0.066s 00:07:52.852 user 0m0.040s 00:07:52.852 sys 0m0.026s 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:52.852 ************************************ 00:07:52.852 END TEST dd_double_output 00:07:52.852 ************************************ 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:52.852 ************************************ 00:07:52.852 START TEST dd_no_input 00:07:52.852 ************************************ 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.852 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:52.853 [2024-11-20 13:27:04.736836] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.853 00:07:52.853 real 0m0.078s 00:07:52.853 user 0m0.051s 00:07:52.853 sys 0m0.026s 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:52.853 ************************************ 00:07:52.853 END TEST dd_no_input 00:07:52.853 ************************************ 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:52.853 ************************************ 00:07:52.853 START TEST dd_no_output 00:07:52.853 ************************************ 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.853 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:53.111 [2024-11-20 13:27:04.857397] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:53.111 13:27:04 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:53.111 00:07:53.111 real 0m0.072s 00:07:53.111 user 0m0.045s 00:07:53.111 sys 0m0.025s 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:53.111 ************************************ 00:07:53.111 END TEST dd_no_output 00:07:53.111 ************************************ 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:53.111 ************************************ 00:07:53.111 START TEST dd_wrong_blocksize 00:07:53.111 ************************************ 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:53.111 13:27:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:53.111 [2024-11-20 13:27:04.983778] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:53.111 00:07:53.111 real 0m0.080s 00:07:53.111 user 0m0.047s 00:07:53.111 sys 0m0.031s 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.111 ************************************ 00:07:53.111 END TEST dd_wrong_blocksize 00:07:53.111 ************************************ 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:53.111 ************************************ 00:07:53.111 START TEST dd_smaller_blocksize 00:07:53.111 ************************************ 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.111 
13:27:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:53.111 13:27:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:53.369 [2024-11-20 13:27:05.120693] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:07:53.369 [2024-11-20 13:27:05.120791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61888 ] 00:07:53.369 [2024-11-20 13:27:05.263793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.627 [2024-11-20 13:27:05.333278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.627 [2024-11-20 13:27:05.390447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.884 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:54.142 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:54.142 [2024-11-20 13:27:06.004950] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:54.142 [2024-11-20 13:27:06.005050] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:54.400 [2024-11-20 13:27:06.127633] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:54.400 00:07:54.400 real 0m1.143s 00:07:54.400 user 0m0.432s 00:07:54.400 sys 0m0.600s 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.400 ************************************ 00:07:54.400 END TEST dd_smaller_blocksize 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:54.400 ************************************ 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:54.400 ************************************ 00:07:54.400 START TEST dd_invalid_count 00:07:54.400 ************************************ 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:54.400 [2024-11-20 13:27:06.304384] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:54.400 ************************************ 00:07:54.400 END TEST dd_invalid_count 00:07:54.400 ************************************ 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:54.400 00:07:54.400 real 0m0.079s 00:07:54.400 user 0m0.052s 00:07:54.400 sys 0m0.025s 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.400 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:54.659 ************************************ 
00:07:54.659 START TEST dd_invalid_oflag 00:07:54.659 ************************************ 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:54.659 [2024-11-20 13:27:06.442609] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:54.659 00:07:54.659 real 0m0.091s 00:07:54.659 user 0m0.069s 00:07:54.659 sys 0m0.020s 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.659 ************************************ 00:07:54.659 END TEST dd_invalid_oflag 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:54.659 ************************************ 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:54.659 ************************************ 00:07:54.659 START TEST dd_invalid_iflag 00:07:54.659 
************************************ 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:54.659 [2024-11-20 13:27:06.583165] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:54.659 00:07:54.659 real 0m0.083s 00:07:54.659 user 0m0.053s 00:07:54.659 sys 0m0.027s 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.659 13:27:06 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:54.659 ************************************ 00:07:54.659 END TEST dd_invalid_iflag 00:07:54.659 ************************************ 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:54.918 ************************************ 00:07:54.918 START TEST dd_unknown_flag 00:07:54.918 ************************************ 00:07:54.918 
13:27:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:54.918 13:27:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:54.918 [2024-11-20 13:27:06.722363] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:54.918 [2024-11-20 13:27:06.722493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61981 ] 00:07:54.918 [2024-11-20 13:27:06.869036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.176 [2024-11-20 13:27:06.931992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.176 [2024-11-20 13:27:06.986402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.176 [2024-11-20 13:27:07.025286] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:55.176 [2024-11-20 13:27:07.025589] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.176 [2024-11-20 13:27:07.025671] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:55.176 [2024-11-20 13:27:07.025686] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.176 [2024-11-20 13:27:07.025922] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:55.176 [2024-11-20 13:27:07.025940] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.176 [2024-11-20 13:27:07.026001] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:55.176 [2024-11-20 13:27:07.026013] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:55.435 [2024-11-20 13:27:07.145969] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.435 00:07:55.435 real 0m0.563s 00:07:55.435 user 0m0.302s 00:07:55.435 sys 0m0.160s 00:07:55.435 ************************************ 00:07:55.435 END TEST dd_unknown_flag 00:07:55.435 ************************************ 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:55.435 ************************************ 00:07:55.435 START TEST dd_invalid_json 00:07:55.435 ************************************ 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.435 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:55.435 [2024-11-20 13:27:07.325534] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:55.435 [2024-11-20 13:27:07.325638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62015 ] 00:07:55.693 [2024-11-20 13:27:07.477585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.693 [2024-11-20 13:27:07.546835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.693 [2024-11-20 13:27:07.546929] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:55.693 [2024-11-20 13:27:07.546951] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:55.693 [2024-11-20 13:27:07.546963] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.693 [2024-11-20 13:27:07.547009] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:55.693 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:07:55.693 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.693 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:07:55.693 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:07:55.693 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:07:55.693 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.693 ************************************ 00:07:55.693 END TEST dd_invalid_json 00:07:55.693 ************************************ 00:07:55.693 00:07:55.693 real 0m0.355s 00:07:55.693 user 0m0.192s 00:07:55.693 sys 0m0.061s 00:07:55.693 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.693 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:55.951 ************************************ 00:07:55.951 START TEST dd_invalid_seek 00:07:55.951 ************************************ 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:55.951 
13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.951 13:27:07 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:55.951 { 00:07:55.951 "subsystems": [ 00:07:55.951 { 00:07:55.951 "subsystem": "bdev", 00:07:55.951 "config": [ 00:07:55.951 { 00:07:55.951 "params": { 00:07:55.951 "block_size": 512, 00:07:55.951 "num_blocks": 512, 00:07:55.951 "name": "malloc0" 00:07:55.951 }, 00:07:55.951 "method": "bdev_malloc_create" 00:07:55.951 }, 00:07:55.951 { 00:07:55.951 "params": { 00:07:55.951 "block_size": 512, 00:07:55.951 "num_blocks": 512, 00:07:55.951 "name": "malloc1" 00:07:55.951 }, 00:07:55.951 "method": "bdev_malloc_create" 00:07:55.951 }, 00:07:55.951 { 00:07:55.951 "method": "bdev_wait_for_examine" 00:07:55.951 } 00:07:55.951 ] 00:07:55.951 } 00:07:55.951 ] 00:07:55.951 } 00:07:55.951 [2024-11-20 13:27:07.744477] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:55.951 [2024-11-20 13:27:07.744592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62039 ] 00:07:55.951 [2024-11-20 13:27:07.895356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.208 [2024-11-20 13:27:07.966322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.208 [2024-11-20 13:27:08.025262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.208 [2024-11-20 13:27:08.094462] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:56.208 [2024-11-20 13:27:08.094542] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.466 [2024-11-20 13:27:08.223671] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:56.466 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:07:56.466 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.466 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:07:56.466 ************************************ 00:07:56.466 END TEST dd_invalid_seek 00:07:56.466 ************************************ 00:07:56.466 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:07:56.466 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:07:56.466 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.466 00:07:56.466 real 0m0.619s 00:07:56.466 user 0m0.401s 00:07:56.466 sys 0m0.170s 00:07:56.466 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.466 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:56.466 13:27:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:56.466 13:27:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.466 13:27:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.466 13:27:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:56.466 ************************************ 00:07:56.466 START TEST dd_invalid_skip 00:07:56.466 ************************************ 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:56.467 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:56.467 { 00:07:56.467 "subsystems": [ 00:07:56.467 { 00:07:56.467 "subsystem": "bdev", 00:07:56.467 "config": [ 00:07:56.467 { 00:07:56.467 "params": { 00:07:56.467 "block_size": 512, 00:07:56.467 "num_blocks": 512, 00:07:56.467 "name": "malloc0" 00:07:56.467 }, 00:07:56.467 "method": "bdev_malloc_create" 00:07:56.467 }, 00:07:56.467 { 00:07:56.467 "params": { 00:07:56.467 "block_size": 512, 00:07:56.467 "num_blocks": 512, 00:07:56.467 "name": "malloc1" 00:07:56.467 }, 00:07:56.467 "method": "bdev_malloc_create" 00:07:56.467 }, 00:07:56.467 { 00:07:56.467 "method": "bdev_wait_for_examine" 00:07:56.467 } 00:07:56.467 ] 00:07:56.467 } 00:07:56.467 ] 00:07:56.467 } 00:07:56.467 [2024-11-20 13:27:08.412882] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:56.467 [2024-11-20 13:27:08.413027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62078 ] 00:07:56.724 [2024-11-20 13:27:08.572850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.724 [2024-11-20 13:27:08.641659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.981 [2024-11-20 13:27:08.698891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.981 [2024-11-20 13:27:08.767431] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:56.981 [2024-11-20 13:27:08.767514] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.981 [2024-11-20 13:27:08.893946] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:57.239 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:07:57.239 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.239 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:07:57.239 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:07:57.239 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:07:57.239 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.239 00:07:57.239 real 0m0.619s 00:07:57.239 user 0m0.406s 00:07:57.239 sys 0m0.163s 00:07:57.239 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.239 13:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:57.239 ************************************ 00:07:57.239 END TEST dd_invalid_skip 00:07:57.239 ************************************ 00:07:57.239 13:27:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:57.239 13:27:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.239 13:27:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.239 13:27:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:57.239 ************************************ 00:07:57.239 START TEST dd_invalid_input_count 00:07:57.239 ************************************ 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.239 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:57.239 [2024-11-20 13:27:09.067232] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:57.239 [2024-11-20 13:27:09.067344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62106 ] 00:07:57.239 { 00:07:57.239 "subsystems": [ 00:07:57.239 { 00:07:57.239 "subsystem": "bdev", 00:07:57.239 "config": [ 00:07:57.239 { 00:07:57.239 "params": { 00:07:57.239 "block_size": 512, 00:07:57.239 "num_blocks": 512, 00:07:57.239 "name": "malloc0" 00:07:57.239 }, 00:07:57.239 "method": "bdev_malloc_create" 00:07:57.239 }, 00:07:57.239 { 00:07:57.239 "params": { 00:07:57.239 "block_size": 512, 00:07:57.239 "num_blocks": 512, 00:07:57.239 "name": "malloc1" 00:07:57.239 }, 00:07:57.239 "method": "bdev_malloc_create" 00:07:57.239 }, 00:07:57.239 { 00:07:57.239 "method": "bdev_wait_for_examine" 00:07:57.239 } 00:07:57.239 ] 00:07:57.239 } 00:07:57.239 ] 00:07:57.239 } 00:07:57.496 [2024-11-20 13:27:09.216951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.496 [2024-11-20 13:27:09.278215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.496 [2024-11-20 13:27:09.332263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.496 [2024-11-20 13:27:09.397247] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:57.496 [2024-11-20 13:27:09.397512] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:57.754 [2024-11-20 13:27:09.520597] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:07:57.754 ************************************ 00:07:57.754 END TEST dd_invalid_input_count 00:07:57.754 ************************************ 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.754 00:07:57.754 real 0m0.582s 00:07:57.754 user 0m0.386s 00:07:57.754 sys 0m0.156s 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:57.754 ************************************ 00:07:57.754 START TEST dd_invalid_output_count 00:07:57.754 ************************************ 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.754 13:27:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:57.754 { 00:07:57.754 "subsystems": [ 00:07:57.754 { 00:07:57.754 "subsystem": "bdev", 00:07:57.754 "config": [ 00:07:57.754 { 00:07:57.754 "params": { 00:07:57.754 "block_size": 512, 00:07:57.754 "num_blocks": 512, 00:07:57.754 "name": "malloc0" 00:07:57.754 }, 00:07:57.754 "method": "bdev_malloc_create" 00:07:57.754 }, 00:07:57.754 { 00:07:57.754 "method": "bdev_wait_for_examine" 00:07:57.754 } 00:07:57.754 ] 00:07:57.754 } 00:07:57.754 ] 00:07:57.754 } 00:07:57.754 [2024-11-20 13:27:09.700061] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 
initialization... 00:07:57.754 [2024-11-20 13:27:09.700158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62145 ] 00:07:58.012 [2024-11-20 13:27:09.849117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.012 [2024-11-20 13:27:09.912104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.012 [2024-11-20 13:27:09.965857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.269 [2024-11-20 13:27:10.022743] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:58.269 [2024-11-20 13:27:10.022822] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.269 [2024-11-20 13:27:10.145095] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:58.269 13:27:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:07:58.269 13:27:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.269 13:27:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:07:58.269 13:27:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:58.269 13:27:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:07:58.269 13:27:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.269 00:07:58.269 real 0m0.575s 00:07:58.269 user 0m0.375s 00:07:58.269 sys 0m0.156s 00:07:58.269 13:27:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.269 13:27:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:58.269 ************************************ 00:07:58.269 END TEST dd_invalid_output_count 00:07:58.269 ************************************ 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:58.527 ************************************ 00:07:58.527 START TEST dd_bs_not_multiple 00:07:58.527 ************************************ 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:58.527 13:27:10 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.527 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:58.527 { 00:07:58.527 "subsystems": [ 00:07:58.527 { 00:07:58.527 "subsystem": "bdev", 00:07:58.527 "config": [ 00:07:58.527 { 00:07:58.527 "params": { 00:07:58.527 "block_size": 512, 00:07:58.527 "num_blocks": 512, 00:07:58.527 "name": "malloc0" 00:07:58.527 }, 00:07:58.527 "method": "bdev_malloc_create" 00:07:58.527 }, 00:07:58.527 { 00:07:58.527 "params": { 00:07:58.527 "block_size": 512, 00:07:58.527 "num_blocks": 512, 00:07:58.527 "name": "malloc1" 00:07:58.527 }, 00:07:58.527 "method": "bdev_malloc_create" 00:07:58.527 }, 00:07:58.527 { 00:07:58.527 "method": "bdev_wait_for_examine" 00:07:58.527 } 00:07:58.527 ] 00:07:58.527 } 00:07:58.527 ] 00:07:58.527 } 00:07:58.527 [2024-11-20 13:27:10.328828] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:07:58.528 [2024-11-20 13:27:10.329141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62183 ] 00:07:58.528 [2024-11-20 13:27:10.476560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.785 [2024-11-20 13:27:10.538961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.785 [2024-11-20 13:27:10.593451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.785 [2024-11-20 13:27:10.659758] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:58.785 [2024-11-20 13:27:10.659830] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.044 [2024-11-20 13:27:10.783442] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:59.044 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:07:59.044 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.044 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:07:59.044 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:07:59.044 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:07:59.044 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.044 00:07:59.044 real 0m0.586s 00:07:59.044 user 0m0.378s 00:07:59.044 sys 0m0.159s 00:07:59.044 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.044 13:27:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:59.044 ************************************ 00:07:59.044 END TEST dd_bs_not_multiple 00:07:59.044 ************************************ 00:07:59.044 ************************************ 00:07:59.044 END TEST spdk_dd_negative 00:07:59.044 ************************************ 00:07:59.044 00:07:59.044 real 0m6.751s 00:07:59.044 user 0m3.709s 00:07:59.044 sys 0m2.448s 00:07:59.044 13:27:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.044 13:27:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:59.044 ************************************ 00:07:59.044 END TEST spdk_dd 00:07:59.044 ************************************ 00:07:59.044 00:07:59.044 real 1m22.214s 00:07:59.044 user 0m52.941s 00:07:59.044 sys 0m36.526s 00:07:59.044 13:27:10 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.044 13:27:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:59.044 13:27:10 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:59.044 13:27:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:59.044 13:27:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:59.044 13:27:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:59.044 13:27:10 -- common/autotest_common.sh@10 -- # set +x 00:07:59.303 13:27:11 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:59.303 13:27:11 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:59.303 13:27:11 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:59.303 13:27:11 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 00:07:59.303 13:27:11 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:59.303 13:27:11 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:59.303 13:27:11 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:59.303 13:27:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:59.303 13:27:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.303 13:27:11 -- common/autotest_common.sh@10 -- # set +x 00:07:59.303 ************************************ 00:07:59.303 START TEST nvmf_tcp 00:07:59.303 ************************************ 00:07:59.303 13:27:11 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:59.303 * Looking for test storage... 00:07:59.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:59.303 13:27:11 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.303 13:27:11 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.303 13:27:11 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.303 13:27:11 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.303 13:27:11 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:59.303 13:27:11 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.303 13:27:11 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.303 --rc genhtml_branch_coverage=1 00:07:59.303 --rc genhtml_function_coverage=1 00:07:59.303 --rc genhtml_legend=1 00:07:59.303 --rc geninfo_all_blocks=1 00:07:59.303 --rc geninfo_unexecuted_blocks=1 00:07:59.303 00:07:59.303 ' 00:07:59.303 13:27:11 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.303 --rc genhtml_branch_coverage=1 00:07:59.303 --rc genhtml_function_coverage=1 00:07:59.303 --rc genhtml_legend=1 00:07:59.303 --rc geninfo_all_blocks=1 00:07:59.303 --rc geninfo_unexecuted_blocks=1 00:07:59.303 00:07:59.303 ' 00:07:59.303 13:27:11 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.303 --rc genhtml_branch_coverage=1 00:07:59.303 --rc genhtml_function_coverage=1 00:07:59.303 --rc genhtml_legend=1 00:07:59.303 --rc geninfo_all_blocks=1 00:07:59.303 --rc geninfo_unexecuted_blocks=1 00:07:59.303 00:07:59.303 ' 00:07:59.303 13:27:11 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.303 --rc genhtml_branch_coverage=1 00:07:59.303 --rc genhtml_function_coverage=1 00:07:59.303 --rc genhtml_legend=1 00:07:59.303 --rc geninfo_all_blocks=1 00:07:59.303 --rc geninfo_unexecuted_blocks=1 00:07:59.303 00:07:59.303 ' 00:07:59.303 13:27:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:59.303 13:27:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:59.303 13:27:11 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:59.303 13:27:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:59.303 13:27:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.303 13:27:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:59.303 ************************************ 00:07:59.303 START TEST nvmf_target_core 00:07:59.303 ************************************ 00:07:59.303 13:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:59.602 * Looking for test storage... 00:07:59.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.602 --rc genhtml_branch_coverage=1 00:07:59.602 --rc genhtml_function_coverage=1 00:07:59.602 --rc genhtml_legend=1 00:07:59.602 --rc geninfo_all_blocks=1 00:07:59.602 --rc geninfo_unexecuted_blocks=1 00:07:59.602 00:07:59.602 ' 00:07:59.602 13:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.603 --rc genhtml_branch_coverage=1 00:07:59.603 --rc genhtml_function_coverage=1 00:07:59.603 --rc genhtml_legend=1 00:07:59.603 --rc geninfo_all_blocks=1 00:07:59.603 --rc geninfo_unexecuted_blocks=1 00:07:59.603 00:07:59.603 ' 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.603 --rc genhtml_branch_coverage=1 00:07:59.603 --rc genhtml_function_coverage=1 00:07:59.603 --rc genhtml_legend=1 00:07:59.603 --rc geninfo_all_blocks=1 00:07:59.603 --rc geninfo_unexecuted_blocks=1 00:07:59.603 00:07:59.603 ' 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.603 --rc genhtml_branch_coverage=1 00:07:59.603 --rc genhtml_function_coverage=1 00:07:59.603 --rc genhtml_legend=1 00:07:59.603 --rc geninfo_all_blocks=1 00:07:59.603 --rc geninfo_unexecuted_blocks=1 00:07:59.603 00:07:59.603 ' 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.603 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.603 ************************************ 00:07:59.603 START TEST nvmf_host_management 00:07:59.603 ************************************ 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:59.603 * Looking for test storage... 
00:07:59.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.603 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.862 --rc genhtml_branch_coverage=1 00:07:59.862 --rc genhtml_function_coverage=1 00:07:59.862 --rc genhtml_legend=1 00:07:59.862 --rc geninfo_all_blocks=1 00:07:59.862 --rc geninfo_unexecuted_blocks=1 00:07:59.862 00:07:59.862 ' 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.862 --rc genhtml_branch_coverage=1 00:07:59.862 --rc genhtml_function_coverage=1 00:07:59.862 --rc genhtml_legend=1 00:07:59.862 --rc geninfo_all_blocks=1 00:07:59.862 --rc geninfo_unexecuted_blocks=1 00:07:59.862 00:07:59.862 ' 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.862 --rc genhtml_branch_coverage=1 00:07:59.862 --rc genhtml_function_coverage=1 00:07:59.862 --rc genhtml_legend=1 00:07:59.862 --rc geninfo_all_blocks=1 00:07:59.862 --rc geninfo_unexecuted_blocks=1 00:07:59.862 00:07:59.862 ' 00:07:59.862 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.862 --rc genhtml_branch_coverage=1 00:07:59.862 --rc genhtml_function_coverage=1 00:07:59.862 --rc genhtml_legend=1 00:07:59.862 --rc geninfo_all_blocks=1 00:07:59.862 --rc geninfo_unexecuted_blocks=1 00:07:59.862 00:07:59.862 ' 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.863 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:59.863 13:27:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:59.863 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:59.864 Cannot find device "nvmf_init_br" 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:59.864 Cannot find device "nvmf_init_br2" 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:59.864 Cannot find device "nvmf_tgt_br" 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:59.864 Cannot find device "nvmf_tgt_br2" 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:59.864 Cannot find device "nvmf_init_br" 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:59.864 Cannot find device "nvmf_init_br2" 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:59.864 Cannot find device "nvmf_tgt_br" 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:59.864 Cannot find device "nvmf_tgt_br2" 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:59.864 Cannot find device "nvmf_br" 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:59.864 Cannot find device "nvmf_init_if" 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:59.864 Cannot find device "nvmf_init_if2" 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:59.864 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:59.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:00.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:00.123 13:27:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:00.123 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:00.123 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:00.123 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:00.381 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:00.381 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:08:00.381 00:08:00.381 --- 10.0.0.3 ping statistics --- 00:08:00.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.381 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:00.381 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:00.381 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:08:00.381 00:08:00.381 --- 10.0.0.4 ping statistics --- 00:08:00.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.381 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:00.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:00.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:00.381 00:08:00.381 --- 10.0.0.1 ping statistics --- 00:08:00.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.381 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:00.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:00.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:08:00.381 00:08:00.381 --- 10.0.0.2 ping statistics --- 00:08:00.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.381 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:00.381 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:00.382 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:00.382 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:00.382 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.382 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.382 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62522 00:08:00.382 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:00.382 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62522 00:08:00.382 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62522 ']' 00:08:00.382 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.382 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.382 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.382 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.382 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.382 [2024-11-20 13:27:12.272265] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:08:00.382 [2024-11-20 13:27:12.272392] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.641 [2024-11-20 13:27:12.433130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.641 [2024-11-20 13:27:12.506653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.641 [2024-11-20 13:27:12.506717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.641 [2024-11-20 13:27:12.506741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.641 [2024-11-20 13:27:12.506751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.641 [2024-11-20 13:27:12.506760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.641 [2024-11-20 13:27:12.508078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.641 [2024-11-20 13:27:12.508237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.641 [2024-11-20 13:27:12.508375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:00.641 [2024-11-20 13:27:12.508389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.641 [2024-11-20 13:27:12.566008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.900 [2024-11-20 13:27:12.685717] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
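Note: at this point the nvmf target (pid 62522) is running inside the nvmf_tgt_ns_spdk namespace behind the veth/bridge fabric brought up above, with the TCP transport created. The rpcs.txt batch that the harness assembles and replays next is not echoed verbatim in the trace; based on MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, the nqn.2016-06.io.spdk:cnode0 subsystem, the host0 host NQN and the 10.0.0.3:4420 listener that appear below, a roughly equivalent rpc.py sequence would be (names and flags here are illustrative, not the literal batch):

    # Illustrative equivalent of the rpcs.txt batch; exact arguments are assumptions.
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420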
00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.900 Malloc0 00:08:00.900 [2024-11-20 13:27:12.764902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62570 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62570 /var/tmp/bdevperf.sock 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62570 ']' 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
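Note: bdevperf is launched with its bdev configuration fed through process substitution (--json /dev/fd/63); gen_nvmf_target_json, traced below, expands the heredoc into the bdev_nvme_attach_controller JSON that is printed a few lines further down. The launch pattern is roughly the following (a sketch; values match the trace, but the invocation form is paraphrased from host_management.sh rather than copied):

    # Sketch of the traced launch; backgrounding with & and $! is assumed from perfpid/waitforlisten.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!                                    # 62570 in this run
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock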
00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:00.900 { 00:08:00.900 "params": { 00:08:00.900 "name": "Nvme$subsystem", 00:08:00.900 "trtype": "$TEST_TRANSPORT", 00:08:00.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.900 "adrfam": "ipv4", 00:08:00.900 "trsvcid": "$NVMF_PORT", 00:08:00.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.900 "hdgst": ${hdgst:-false}, 00:08:00.900 "ddgst": ${ddgst:-false} 00:08:00.900 }, 00:08:00.900 "method": "bdev_nvme_attach_controller" 00:08:00.900 } 00:08:00.900 EOF 00:08:00.900 )") 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:00.900 13:27:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:00.900 "params": { 00:08:00.900 "name": "Nvme0", 00:08:00.900 "trtype": "tcp", 00:08:00.900 "traddr": "10.0.0.3", 00:08:00.900 "adrfam": "ipv4", 00:08:00.900 "trsvcid": "4420", 00:08:00.900 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:00.900 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:00.900 "hdgst": false, 00:08:00.900 "ddgst": false 00:08:00.900 }, 00:08:00.900 "method": "bdev_nvme_attach_controller" 00:08:00.900 }' 00:08:01.158 [2024-11-20 13:27:12.899671] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:08:01.158 [2024-11-20 13:27:12.899824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62570 ] 00:08:01.158 [2024-11-20 13:27:13.054379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.416 [2024-11-20 13:27:13.118532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.416 [2024-11-20 13:27:13.181690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.416 Running I/O for 10 seconds... 
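Note: "Running I/O for 10 seconds" marks the start of the verify workload; the waitforio check traced below then polls bdev_get_iostat over the bdevperf RPC socket until Nvme0n1 has accumulated at least 100 reads (899 on the first poll in this run). A reconstruction of that helper, based only on what the xtrace shows, looks like this; the pause between polls is not visible in the trace and is an assumption:

    # Reconstructed from the xtrace below (arg checks, countdown, iostat poll, threshold 100).
    waitforio() {
        local sock=$1 bdev=$2 ret=1 i read_io_count
        [ -z "$sock" ] && return 1
        [ -z "$bdev" ] && return 1
        for (( i = 10; i != 0; i-- )); do
            read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25   # assumed poll interval; not shown in the trace
        done
        return $ret
    }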
00:08:02.350 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.350 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:02.351 13:27:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.351 13:27:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.351 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.351 [2024-11-20 13:27:14.077314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077549] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.351 [2024-11-20 13:27:14.077939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.351 [2024-11-20 13:27:14.077951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.077960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.077972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.077981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.077992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.352 [2024-11-20 13:27:14.078469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 13:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:02.352 [2024-11-20 13:27:14.078651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078921] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.078983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.078997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.079013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.079029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.079055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.079072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.079091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.352 [2024-11-20 13:27:14.079106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.352 [2024-11-20 13:27:14.079123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126d130 is same with the state(6) to be set 00:08:02.352 task offset: 0 on job bdev=Nvme0n1 fails 00:08:02.352 00:08:02.352 Latency(us) 00:08:02.352 [2024-11-20T13:27:14.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.352 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:02.352 Job: Nvme0n1 ended in about 0.77 seconds with error 00:08:02.352 Verification LBA range: start 0x0 length 0x400 00:08:02.353 Nvme0n1 : 0.77 1329.26 83.08 83.08 0.00 44273.53 2651.23 39798.23 00:08:02.353 [2024-11-20T13:27:14.310Z] =================================================================================================================== 00:08:02.353 [2024-11-20T13:27:14.310Z] Total : 1329.26 83.08 83.08 0.00 44273.53 2651.23 39798.23 00:08:02.353 [2024-11-20 13:27:14.079374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:02.353 [2024-11-20 13:27:14.079400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.353 [2024-11-20 13:27:14.079413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:02.353 [2024-11-20 
13:27:14.079422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.353 [2024-11-20 13:27:14.079432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:02.353 [2024-11-20 13:27:14.079442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.353 [2024-11-20 13:27:14.079452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:02.353 [2024-11-20 13:27:14.079461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.353 [2024-11-20 13:27:14.079470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272ce0 is same with the state(6) to be set 00:08:02.353 [2024-11-20 13:27:14.080571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:02.353 [2024-11-20 13:27:14.082961] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:02.353 [2024-11-20 13:27:14.082986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1272ce0 (9): Bad file descriptor 00:08:02.353 [2024-11-20 13:27:14.087527] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:03.287 13:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62570 00:08:03.287 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62570) - No such process 00:08:03.287 13:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:03.287 13:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:03.287 13:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:03.287 13:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:03.287 13:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:03.287 13:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:03.287 13:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:03.287 13:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:03.287 { 00:08:03.287 "params": { 00:08:03.287 "name": "Nvme$subsystem", 00:08:03.287 "trtype": "$TEST_TRANSPORT", 00:08:03.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.287 "adrfam": "ipv4", 00:08:03.287 "trsvcid": "$NVMF_PORT", 00:08:03.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.287 "hdgst": ${hdgst:-false}, 00:08:03.287 "ddgst": ${ddgst:-false} 00:08:03.287 }, 00:08:03.287 "method": "bdev_nvme_attach_controller" 00:08:03.287 } 00:08:03.287 EOF 00:08:03.287 )") 00:08:03.287 13:27:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:03.287 13:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:03.287 13:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:03.287 13:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:03.287 "params": { 00:08:03.287 "name": "Nvme0", 00:08:03.287 "trtype": "tcp", 00:08:03.287 "traddr": "10.0.0.3", 00:08:03.287 "adrfam": "ipv4", 00:08:03.287 "trsvcid": "4420", 00:08:03.287 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:03.287 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:03.287 "hdgst": false, 00:08:03.287 "ddgst": false 00:08:03.287 }, 00:08:03.287 "method": "bdev_nvme_attach_controller" 00:08:03.287 }' 00:08:03.287 [2024-11-20 13:27:15.140713] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:08:03.287 [2024-11-20 13:27:15.140813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62608 ] 00:08:03.545 [2024-11-20 13:27:15.287984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.545 [2024-11-20 13:27:15.357970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.545 [2024-11-20 13:27:15.423797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.803 Running I/O for 1 seconds... 00:08:04.737 1472.00 IOPS, 92.00 MiB/s 00:08:04.737 Latency(us) 00:08:04.737 [2024-11-20T13:27:16.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.737 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:04.737 Verification LBA range: start 0x0 length 0x400 00:08:04.737 Nvme0n1 : 1.04 1475.17 92.20 0.00 0.00 42531.79 4587.52 41466.41 00:08:04.737 [2024-11-20T13:27:16.694Z] =================================================================================================================== 00:08:04.737 [2024-11-20T13:27:16.694Z] Total : 1475.17 92.20 0.00 0.00 42531.79 4587.52 41466.41 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 
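Between the two bdevperf runs above, the host-management check itself reduces to two target-side RPCs: the host NQN is removed from the subsystem while I/O is in flight (which is what produced the ABORTED - SQ DELETION notices and the controller reset earlier in the trace) and then added back so the second, clean verify run can reconnect. A sketch of the equivalent pair of calls with rpc.py, using the NQNs from this test:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Revoke the host's access; the initiator's queued writes are aborted and bdevperf resets the controller.
"$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-grant access so the host can reconnect and complete a clean verify pass.
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0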
00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.995 rmmod nvme_tcp 00:08:04.995 rmmod nvme_fabrics 00:08:04.995 rmmod nvme_keyring 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62522 ']' 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62522 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62522 ']' 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62522 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.995 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62522 00:08:05.255 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:05.255 killing process with pid 62522 00:08:05.255 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:05.255 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62522' 00:08:05.255 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62522 00:08:05.255 13:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62522 00:08:05.512 [2024-11-20 13:27:17.277613] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:05.512 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:05.512 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:05.512 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:05.512 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:05.512 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:05.512 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:05.512 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:05.512 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:05.512 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:05.512 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:05.513 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:05.513 13:27:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:05.513 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:05.513 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:05.513 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:05.513 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:05.513 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:05.513 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:05.513 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:05.771 00:08:05.771 real 0m6.121s 00:08:05.771 user 0m21.838s 00:08:05.771 sys 0m1.764s 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.771 ************************************ 00:08:05.771 END TEST nvmf_host_management 00:08:05.771 ************************************ 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:05.771 ************************************ 00:08:05.771 START TEST nvmf_lvol 00:08:05.771 ************************************ 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:05.771 * Looking for test storage... 
00:08:05.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.771 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:06.030 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:06.030 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.030 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.030 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.030 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:06.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.031 --rc genhtml_branch_coverage=1 00:08:06.031 --rc genhtml_function_coverage=1 00:08:06.031 --rc genhtml_legend=1 00:08:06.031 --rc geninfo_all_blocks=1 00:08:06.031 --rc geninfo_unexecuted_blocks=1 00:08:06.031 00:08:06.031 ' 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:06.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.031 --rc genhtml_branch_coverage=1 00:08:06.031 --rc genhtml_function_coverage=1 00:08:06.031 --rc genhtml_legend=1 00:08:06.031 --rc geninfo_all_blocks=1 00:08:06.031 --rc geninfo_unexecuted_blocks=1 00:08:06.031 00:08:06.031 ' 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:06.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.031 --rc genhtml_branch_coverage=1 00:08:06.031 --rc genhtml_function_coverage=1 00:08:06.031 --rc genhtml_legend=1 00:08:06.031 --rc geninfo_all_blocks=1 00:08:06.031 --rc geninfo_unexecuted_blocks=1 00:08:06.031 00:08:06.031 ' 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:06.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.031 --rc genhtml_branch_coverage=1 00:08:06.031 --rc genhtml_function_coverage=1 00:08:06.031 --rc genhtml_legend=1 00:08:06.031 --rc geninfo_all_blocks=1 00:08:06.031 --rc geninfo_unexecuted_blocks=1 00:08:06.031 00:08:06.031 ' 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.031 13:27:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:06.031 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:06.031 
13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.031 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
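nvmf_veth_init, traced below, rebuilds the virtual test network for the lvol test: a nvmf_tgt_ns_spdk namespace for the target, two veth pairs whose *_if ends carry the target addresses inside the namespace, two veth pairs for the initiator side, and a nvmf_br bridge joining the host-side peer ends. Condensed from the ip commands that follow (the "Cannot find device" teardown probes, link-up steps and iptables rules are omitted), the topology amounts to:

ip netns add nvmf_tgt_ns_spdk
# Initiator-side veth pairs; the *_if ends stay in the default namespace.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
# Target-side veth pairs; the *_if ends are moved into the target namespace.
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: 10.0.0.1/.2 for the initiator, 10.0.0.3/.4 for the target inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# Bridge the four host-side peer ends together so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br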
00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:06.032 Cannot find device "nvmf_init_br" 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:06.032 Cannot find device "nvmf_init_br2" 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:06.032 Cannot find device "nvmf_tgt_br" 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:06.032 Cannot find device "nvmf_tgt_br2" 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:06.032 Cannot find device "nvmf_init_br" 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:06.032 Cannot find device "nvmf_init_br2" 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:06.032 Cannot find device "nvmf_tgt_br" 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:06.032 Cannot find device "nvmf_tgt_br2" 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:06.032 Cannot find device "nvmf_br" 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:06.032 Cannot find device "nvmf_init_if" 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:06.032 Cannot find device "nvmf_init_if2" 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:06.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:06.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:06.032 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:06.290 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:06.290 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:06.290 13:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:06.290 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:06.290 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:06.290 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:06.290 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:06.290 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:06.290 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:06.290 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:06.290 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:06.290 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:06.290 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:06.291 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:06.291 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:08:06.291 00:08:06.291 --- 10.0.0.3 ping statistics --- 00:08:06.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.291 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:06.291 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:06.291 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:08:06.291 00:08:06.291 --- 10.0.0.4 ping statistics --- 00:08:06.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.291 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:06.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:06.291 00:08:06.291 --- 10.0.0.1 ping statistics --- 00:08:06.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.291 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:06.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:06.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:08:06.291 00:08:06.291 --- 10.0.0.2 ping statistics --- 00:08:06.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.291 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62881 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62881 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62881 ']' 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.291 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:06.549 [2024-11-20 13:27:18.295643] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
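At this point the target application has just been started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x7) and the nvmf_lvol test drives everything else over JSON-RPC. The sequence traced over the next entries boils down to roughly the following sketch, assembled from the rpc.py calls visible in this log; the lvstore/lvol UUIDs are generated at run time, so they are captured into variables here, and the real script waits for /var/tmp/spdk.sock before issuing any RPC.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192

# backing store: two 64 MiB malloc bdevs striped into raid0, lvstore on top, 20 MiB lvol
$rpc bdev_malloc_create 64 512            # -> Malloc0
$rpc bdev_malloc_create 64 512            # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

# export the lvol over NVMe/TCP on the in-namespace address
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# run I/O while snapshotting, resizing, cloning and inflating the lvol
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait "$perf_pid"

# teardown, as traced after the perf results
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"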
00:08:06.549 [2024-11-20 13:27:18.295730] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.549 [2024-11-20 13:27:18.441492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:06.549 [2024-11-20 13:27:18.503806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.549 [2024-11-20 13:27:18.503857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.549 [2024-11-20 13:27:18.503869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.549 [2024-11-20 13:27:18.503877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.549 [2024-11-20 13:27:18.503885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.549 [2024-11-20 13:27:18.505036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.549 [2024-11-20 13:27:18.505101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.549 [2024-11-20 13:27:18.505110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.809 [2024-11-20 13:27:18.575580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.809 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.809 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:06.809 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:06.809 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:06.809 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:06.809 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.809 13:27:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:07.070 [2024-11-20 13:27:19.009983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.329 13:27:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:07.588 13:27:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:07.588 13:27:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:07.846 13:27:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:07.846 13:27:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:08.103 13:27:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:08.360 13:27:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9e9e403d-126a-429d-bef3-f411ca828c78 00:08:08.617 13:27:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9e9e403d-126a-429d-bef3-f411ca828c78 lvol 20 00:08:08.876 13:27:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3257dd55-e268-461e-8cf4-4e18b93b40f6 00:08:08.876 13:27:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:09.135 13:27:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3257dd55-e268-461e-8cf4-4e18b93b40f6 00:08:09.394 13:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:09.652 [2024-11-20 13:27:21.425946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:09.652 13:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:09.910 13:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62959 00:08:09.910 13:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:09.910 13:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:10.845 13:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 3257dd55-e268-461e-8cf4-4e18b93b40f6 MY_SNAPSHOT 00:08:11.117 13:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=cc3977b5-ff15-4b5f-8ab8-87cba495a255 00:08:11.117 13:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 3257dd55-e268-461e-8cf4-4e18b93b40f6 30 00:08:11.683 13:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone cc3977b5-ff15-4b5f-8ab8-87cba495a255 MY_CLONE 00:08:11.941 13:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e5b22306-9367-49df-a7a3-8861f92c6ee1 00:08:11.941 13:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e5b22306-9367-49df-a7a3-8861f92c6ee1 00:08:12.507 13:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62959 00:08:20.651 Initializing NVMe Controllers 00:08:20.651 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:20.651 Controller IO queue size 128, less than required. 00:08:20.651 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:20.651 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:20.651 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:20.651 Initialization complete. Launching workers. 
00:08:20.651 ======================================================== 00:08:20.651 Latency(us) 00:08:20.651 Device Information : IOPS MiB/s Average min max 00:08:20.651 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10219.90 39.92 12531.76 2403.75 58591.21 00:08:20.651 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10376.80 40.53 12333.99 3329.93 66192.77 00:08:20.651 ======================================================== 00:08:20.651 Total : 20596.70 80.46 12432.12 2403.75 66192.77 00:08:20.651 00:08:20.651 13:27:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.651 13:27:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3257dd55-e268-461e-8cf4-4e18b93b40f6 00:08:20.909 13:27:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9e9e403d-126a-429d-bef3-f411ca828c78 00:08:21.166 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:21.166 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:21.166 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:21.166 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:21.166 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:21.166 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:21.166 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:21.166 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:21.166 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:21.166 rmmod nvme_tcp 00:08:21.166 rmmod nvme_fabrics 00:08:21.423 rmmod nvme_keyring 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62881 ']' 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62881 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62881 ']' 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62881 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62881 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.423 killing process with pid 62881 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62881' 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62881 00:08:21.423 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62881 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:21.681 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:21.939 00:08:21.939 real 0m16.065s 00:08:21.939 user 1m6.094s 00:08:21.939 sys 0m4.387s 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.939 ************************************ 00:08:21.939 END TEST nvmf_lvol 00:08:21.939 ************************************ 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.939 ************************************ 00:08:21.939 START TEST nvmf_lvs_grow 00:08:21.939 ************************************ 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:21.939 * Looking for test storage... 00:08:21.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.939 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:22.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.199 --rc genhtml_branch_coverage=1 00:08:22.199 --rc genhtml_function_coverage=1 00:08:22.199 --rc genhtml_legend=1 00:08:22.199 --rc geninfo_all_blocks=1 00:08:22.199 --rc geninfo_unexecuted_blocks=1 00:08:22.199 00:08:22.199 ' 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:22.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.199 --rc genhtml_branch_coverage=1 00:08:22.199 --rc genhtml_function_coverage=1 00:08:22.199 --rc genhtml_legend=1 00:08:22.199 --rc geninfo_all_blocks=1 00:08:22.199 --rc geninfo_unexecuted_blocks=1 00:08:22.199 00:08:22.199 ' 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:22.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.199 --rc genhtml_branch_coverage=1 00:08:22.199 --rc genhtml_function_coverage=1 00:08:22.199 --rc genhtml_legend=1 00:08:22.199 --rc geninfo_all_blocks=1 00:08:22.199 --rc geninfo_unexecuted_blocks=1 00:08:22.199 00:08:22.199 ' 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:22.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.199 --rc genhtml_branch_coverage=1 00:08:22.199 --rc genhtml_function_coverage=1 00:08:22.199 --rc genhtml_legend=1 00:08:22.199 --rc geninfo_all_blocks=1 00:08:22.199 --rc geninfo_unexecuted_blocks=1 00:08:22.199 00:08:22.199 ' 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:22.199 13:27:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.199 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.200 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
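The scripts/common.sh trace a few entries above (cmp_versions, lt 1.15 2) is an element-wise version comparison used to decide which lcov options apply. Assuming purely numeric fields, it behaves roughly like the sketch below; the real helper additionally strips non-numeric suffixes through its decimal function.

version_lt() {    # sketch of "lt A B": is version A older than version B?
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal is not "less than"
}
# version_lt 1.15 2 succeeds, so the lcov 1.x style --rc lcov_*_coverage=1 options are kept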
00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:22.200 Cannot find device "nvmf_init_br" 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:22.200 Cannot find device "nvmf_init_br2" 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:22.200 Cannot find device "nvmf_tgt_br" 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:22.200 Cannot find device "nvmf_tgt_br2" 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:22.200 13:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:22.200 Cannot find device "nvmf_init_br" 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:22.200 Cannot find device "nvmf_init_br2" 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:22.200 Cannot find device "nvmf_tgt_br" 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:22.200 Cannot find device "nvmf_tgt_br2" 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:22.200 Cannot find device "nvmf_br" 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:22.200 Cannot find device "nvmf_init_if" 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:22.200 Cannot find device "nvmf_init_if2" 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:22.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:22.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:22.200 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:22.459 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
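The ipts calls traced just below are a thin wrapper that tags every firewall rule it adds with an SPDK_NVMF comment, so that the iptr call seen earlier in this log during nvmftestfini can strip exactly those rules and nothing else. In sketch form, reconstructed from the expanded iptables commands in the trace:

ipts() {
    # add the rule and record the original rule spec in a comment for later removal
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {
    # restore the ruleset without any rule carrying the SPDK_NVMF tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

# rules opened for this run: NVMe/TCP port 4420 on both initiator veths,
# plus forwarding across the test bridge
ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT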
00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:22.460 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:22.460 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:08:22.460 00:08:22.460 --- 10.0.0.3 ping statistics --- 00:08:22.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.460 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:22.460 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:22.460 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:08:22.460 00:08:22.460 --- 10.0.0.4 ping statistics --- 00:08:22.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.460 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:22.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:22.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:08:22.460 00:08:22.460 --- 10.0.0.1 ping statistics --- 00:08:22.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.460 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:22.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:22.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:08:22.460 00:08:22.460 --- 10.0.0.2 ping statistics --- 00:08:22.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.460 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63338 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63338 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63338 ']' 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.460 13:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.718 [2024-11-20 13:27:34.461306] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:08:22.718 [2024-11-20 13:27:34.462122] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.718 [2024-11-20 13:27:34.614336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.976 [2024-11-20 13:27:34.696037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.976 [2024-11-20 13:27:34.696113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.976 [2024-11-20 13:27:34.696130] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.976 [2024-11-20 13:27:34.696142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.976 [2024-11-20 13:27:34.696153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.976 [2024-11-20 13:27:34.696686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.976 [2024-11-20 13:27:34.752250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.542 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.542 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:23.542 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:23.542 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:23.542 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.800 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.800 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:24.058 [2024-11-20 13:27:35.772039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.058 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:24.058 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.058 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.058 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.058 ************************************ 00:08:24.058 START TEST lvs_grow_clean 00:08:24.058 ************************************ 00:08:24.058 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:24.059 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:24.059 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:24.059 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:24.059 13:27:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:24.059 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:24.059 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:24.059 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:24.059 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:24.059 13:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.318 13:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:24.318 13:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:24.576 13:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=32de2da3-bd6b-4395-89a8-fb919c0ad59f 00:08:24.576 13:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32de2da3-bd6b-4395-89a8-fb919c0ad59f 00:08:24.576 13:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:25.143 13:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:25.143 13:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:25.143 13:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 32de2da3-bd6b-4395-89a8-fb919c0ad59f lvol 150 00:08:25.413 13:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c7afc200-95e6-41f6-a927-4baf740b7a45 00:08:25.413 13:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:25.413 13:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:25.702 [2024-11-20 13:27:37.455244] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:25.702 [2024-11-20 13:27:37.455368] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:25.702 true 00:08:25.702 13:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:25.702 13:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32de2da3-bd6b-4395-89a8-fb919c0ad59f 00:08:25.960 13:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:25.960 13:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.218 13:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c7afc200-95e6-41f6-a927-4baf740b7a45 00:08:26.476 13:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:26.735 [2024-11-20 13:27:38.643883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:26.735 13:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:27.301 13:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63435 00:08:27.301 13:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:27.301 13:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:27.301 13:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63435 /var/tmp/bdevperf.sock 00:08:27.301 13:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63435 ']' 00:08:27.301 13:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:27.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:27.301 13:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.301 13:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:27.301 13:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.301 13:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:27.301 [2024-11-20 13:27:39.010349] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
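[Editor's note] Up to this point the clean case has set up a 200M file-backed AIO bdev, an lvstore with 4 MiB clusters, a 150M lvol, and an NVMe-oF subsystem exporting that lvol over TCP on 10.0.0.3:4420. A condensed sketch of that sequence using the same rpc.py subcommands recorded above; the $lvs/$lvol variables are placeholders for the UUIDs rpc.py prints, not the ones from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
# Create the lvstore and a 150M lvol on it, capturing the UUIDs rpc.py returns.
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
# Export the lvol over NVMe/TCP, as done at nvmf_lvs_grow.sh lines 41-44 in the trace.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420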
00:08:27.301 [2024-11-20 13:27:39.010510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63435 ] 00:08:27.301 [2024-11-20 13:27:39.169278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.301 [2024-11-20 13:27:39.239485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.558 [2024-11-20 13:27:39.298599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.558 13:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.558 13:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:27.558 13:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:27.816 Nvme0n1 00:08:27.816 13:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:28.379 [ 00:08:28.379 { 00:08:28.379 "name": "Nvme0n1", 00:08:28.379 "aliases": [ 00:08:28.379 "c7afc200-95e6-41f6-a927-4baf740b7a45" 00:08:28.379 ], 00:08:28.379 "product_name": "NVMe disk", 00:08:28.379 "block_size": 4096, 00:08:28.379 "num_blocks": 38912, 00:08:28.379 "uuid": "c7afc200-95e6-41f6-a927-4baf740b7a45", 00:08:28.379 "numa_id": -1, 00:08:28.379 "assigned_rate_limits": { 00:08:28.379 "rw_ios_per_sec": 0, 00:08:28.379 "rw_mbytes_per_sec": 0, 00:08:28.379 "r_mbytes_per_sec": 0, 00:08:28.379 "w_mbytes_per_sec": 0 00:08:28.379 }, 00:08:28.379 "claimed": false, 00:08:28.379 "zoned": false, 00:08:28.379 "supported_io_types": { 00:08:28.379 "read": true, 00:08:28.379 "write": true, 00:08:28.379 "unmap": true, 00:08:28.379 "flush": true, 00:08:28.379 "reset": true, 00:08:28.379 "nvme_admin": true, 00:08:28.379 "nvme_io": true, 00:08:28.379 "nvme_io_md": false, 00:08:28.379 "write_zeroes": true, 00:08:28.379 "zcopy": false, 00:08:28.379 "get_zone_info": false, 00:08:28.379 "zone_management": false, 00:08:28.379 "zone_append": false, 00:08:28.379 "compare": true, 00:08:28.379 "compare_and_write": true, 00:08:28.379 "abort": true, 00:08:28.379 "seek_hole": false, 00:08:28.379 "seek_data": false, 00:08:28.379 "copy": true, 00:08:28.379 "nvme_iov_md": false 00:08:28.379 }, 00:08:28.379 "memory_domains": [ 00:08:28.379 { 00:08:28.379 "dma_device_id": "system", 00:08:28.379 "dma_device_type": 1 00:08:28.379 } 00:08:28.379 ], 00:08:28.379 "driver_specific": { 00:08:28.379 "nvme": [ 00:08:28.379 { 00:08:28.379 "trid": { 00:08:28.379 "trtype": "TCP", 00:08:28.379 "adrfam": "IPv4", 00:08:28.379 "traddr": "10.0.0.3", 00:08:28.379 "trsvcid": "4420", 00:08:28.379 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:28.379 }, 00:08:28.379 "ctrlr_data": { 00:08:28.379 "cntlid": 1, 00:08:28.379 "vendor_id": "0x8086", 00:08:28.379 "model_number": "SPDK bdev Controller", 00:08:28.379 "serial_number": "SPDK0", 00:08:28.379 "firmware_revision": "25.01", 00:08:28.379 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:28.379 "oacs": { 00:08:28.379 "security": 0, 00:08:28.379 "format": 0, 00:08:28.379 "firmware": 0, 
00:08:28.379 "ns_manage": 0 00:08:28.379 }, 00:08:28.379 "multi_ctrlr": true, 00:08:28.379 "ana_reporting": false 00:08:28.379 }, 00:08:28.379 "vs": { 00:08:28.379 "nvme_version": "1.3" 00:08:28.379 }, 00:08:28.379 "ns_data": { 00:08:28.379 "id": 1, 00:08:28.379 "can_share": true 00:08:28.379 } 00:08:28.379 } 00:08:28.379 ], 00:08:28.379 "mp_policy": "active_passive" 00:08:28.379 } 00:08:28.379 } 00:08:28.379 ] 00:08:28.379 13:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63451 00:08:28.379 13:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:28.379 13:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:28.379 Running I/O for 10 seconds... 00:08:29.312 Latency(us) 00:08:29.312 [2024-11-20T13:27:41.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.312 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:29.312 [2024-11-20T13:27:41.269Z] =================================================================================================================== 00:08:29.312 [2024-11-20T13:27:41.269Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:29.312 00:08:30.287 13:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 32de2da3-bd6b-4395-89a8-fb919c0ad59f 00:08:30.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.545 Nvme0n1 : 2.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:08:30.545 [2024-11-20T13:27:42.502Z] =================================================================================================================== 00:08:30.545 [2024-11-20T13:27:42.502Z] Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:08:30.545 00:08:30.545 true 00:08:30.545 13:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32de2da3-bd6b-4395-89a8-fb919c0ad59f 00:08:30.545 13:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:31.110 13:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:31.110 13:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:31.110 13:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63451 00:08:31.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.367 Nvme0n1 : 3.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:31.367 [2024-11-20T13:27:43.324Z] =================================================================================================================== 00:08:31.367 [2024-11-20T13:27:43.324Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:31.367 00:08:32.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.299 Nvme0n1 : 4.00 6502.50 25.40 0.00 0.00 0.00 0.00 0.00 00:08:32.299 [2024-11-20T13:27:44.256Z] 
=================================================================================================================== 00:08:32.299 [2024-11-20T13:27:44.256Z] Total : 6502.50 25.40 0.00 0.00 0.00 0.00 0.00 00:08:32.299 00:08:33.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.671 Nvme0n1 : 5.00 6491.40 25.36 0.00 0.00 0.00 0.00 0.00 00:08:33.671 [2024-11-20T13:27:45.628Z] =================================================================================================================== 00:08:33.671 [2024-11-20T13:27:45.628Z] Total : 6491.40 25.36 0.00 0.00 0.00 0.00 0.00 00:08:33.671 00:08:34.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.301 Nvme0n1 : 6.00 6467.83 25.26 0.00 0.00 0.00 0.00 0.00 00:08:34.301 [2024-11-20T13:27:46.258Z] =================================================================================================================== 00:08:34.301 [2024-11-20T13:27:46.258Z] Total : 6467.83 25.26 0.00 0.00 0.00 0.00 0.00 00:08:34.301 00:08:35.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.682 Nvme0n1 : 7.00 6451.00 25.20 0.00 0.00 0.00 0.00 0.00 00:08:35.682 [2024-11-20T13:27:47.639Z] =================================================================================================================== 00:08:35.682 [2024-11-20T13:27:47.639Z] Total : 6451.00 25.20 0.00 0.00 0.00 0.00 0.00 00:08:35.682 00:08:36.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.616 Nvme0n1 : 8.00 6438.38 25.15 0.00 0.00 0.00 0.00 0.00 00:08:36.616 [2024-11-20T13:27:48.573Z] =================================================================================================================== 00:08:36.616 [2024-11-20T13:27:48.573Z] Total : 6438.38 25.15 0.00 0.00 0.00 0.00 0.00 00:08:36.616 00:08:37.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.550 Nvme0n1 : 9.00 6386.22 24.95 0.00 0.00 0.00 0.00 0.00 00:08:37.550 [2024-11-20T13:27:49.507Z] =================================================================================================================== 00:08:37.550 [2024-11-20T13:27:49.507Z] Total : 6386.22 24.95 0.00 0.00 0.00 0.00 0.00 00:08:37.550 00:08:38.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.484 Nvme0n1 : 10.00 6369.90 24.88 0.00 0.00 0.00 0.00 0.00 00:08:38.484 [2024-11-20T13:27:50.441Z] =================================================================================================================== 00:08:38.484 [2024-11-20T13:27:50.441Z] Total : 6369.90 24.88 0.00 0.00 0.00 0.00 0.00 00:08:38.484 00:08:38.484 00:08:38.484 Latency(us) 00:08:38.484 [2024-11-20T13:27:50.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.484 Nvme0n1 : 10.00 6366.93 24.87 0.00 0.00 20092.08 14715.81 55765.18 00:08:38.484 [2024-11-20T13:27:50.441Z] =================================================================================================================== 00:08:38.484 [2024-11-20T13:27:50.441Z] Total : 6366.93 24.87 0.00 0.00 20092.08 14715.81 55765.18 00:08:38.484 { 00:08:38.484 "results": [ 00:08:38.484 { 00:08:38.484 "job": "Nvme0n1", 00:08:38.484 "core_mask": "0x2", 00:08:38.484 "workload": "randwrite", 00:08:38.484 "status": "finished", 00:08:38.484 "queue_depth": 128, 00:08:38.484 "io_size": 4096, 00:08:38.484 "runtime": 
10.004824, 00:08:38.484 "iops": 6366.928593646425, 00:08:38.484 "mibps": 24.870814818931347, 00:08:38.484 "io_failed": 0, 00:08:38.484 "io_timeout": 0, 00:08:38.484 "avg_latency_us": 20092.08056583417, 00:08:38.484 "min_latency_us": 14715.81090909091, 00:08:38.484 "max_latency_us": 55765.178181818184 00:08:38.484 } 00:08:38.484 ], 00:08:38.484 "core_count": 1 00:08:38.484 } 00:08:38.484 13:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63435 00:08:38.484 13:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63435 ']' 00:08:38.484 13:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63435 00:08:38.484 13:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:38.484 13:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.484 13:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63435 00:08:38.484 13:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:38.484 killing process with pid 63435 00:08:38.484 Received shutdown signal, test time was about 10.000000 seconds 00:08:38.484 00:08:38.484 Latency(us) 00:08:38.484 [2024-11-20T13:27:50.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.484 [2024-11-20T13:27:50.441Z] =================================================================================================================== 00:08:38.484 [2024-11-20T13:27:50.441Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:38.484 13:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:38.484 13:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63435' 00:08:38.484 13:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63435 00:08:38.484 13:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63435 00:08:38.742 13:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:39.000 13:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:39.258 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32de2da3-bd6b-4395-89a8-fb919c0ad59f 00:08:39.258 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:39.517 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:39.517 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:39.517 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.776 [2024-11-20 13:27:51.693761] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:39.776 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32de2da3-bd6b-4395-89a8-fb919c0ad59f 00:08:39.776 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:39.776 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32de2da3-bd6b-4395-89a8-fb919c0ad59f 00:08:39.776 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.776 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.776 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:40.034 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.034 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:40.034 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.034 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:40.034 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:40.034 13:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32de2da3-bd6b-4395-89a8-fb919c0ad59f 00:08:40.293 request: 00:08:40.293 { 00:08:40.293 "uuid": "32de2da3-bd6b-4395-89a8-fb919c0ad59f", 00:08:40.293 "method": "bdev_lvol_get_lvstores", 00:08:40.293 "req_id": 1 00:08:40.293 } 00:08:40.293 Got JSON-RPC error response 00:08:40.293 response: 00:08:40.293 { 00:08:40.293 "code": -19, 00:08:40.293 "message": "No such device" 00:08:40.293 } 00:08:40.293 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:40.293 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.293 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:40.293 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.293 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.551 aio_bdev 00:08:40.551 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
c7afc200-95e6-41f6-a927-4baf740b7a45 00:08:40.551 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c7afc200-95e6-41f6-a927-4baf740b7a45 00:08:40.551 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.551 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:40.551 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.551 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.551 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:40.810 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c7afc200-95e6-41f6-a927-4baf740b7a45 -t 2000 00:08:41.068 [ 00:08:41.068 { 00:08:41.068 "name": "c7afc200-95e6-41f6-a927-4baf740b7a45", 00:08:41.068 "aliases": [ 00:08:41.068 "lvs/lvol" 00:08:41.068 ], 00:08:41.068 "product_name": "Logical Volume", 00:08:41.068 "block_size": 4096, 00:08:41.068 "num_blocks": 38912, 00:08:41.068 "uuid": "c7afc200-95e6-41f6-a927-4baf740b7a45", 00:08:41.068 "assigned_rate_limits": { 00:08:41.068 "rw_ios_per_sec": 0, 00:08:41.068 "rw_mbytes_per_sec": 0, 00:08:41.068 "r_mbytes_per_sec": 0, 00:08:41.068 "w_mbytes_per_sec": 0 00:08:41.068 }, 00:08:41.068 "claimed": false, 00:08:41.068 "zoned": false, 00:08:41.068 "supported_io_types": { 00:08:41.068 "read": true, 00:08:41.068 "write": true, 00:08:41.068 "unmap": true, 00:08:41.068 "flush": false, 00:08:41.068 "reset": true, 00:08:41.068 "nvme_admin": false, 00:08:41.068 "nvme_io": false, 00:08:41.068 "nvme_io_md": false, 00:08:41.068 "write_zeroes": true, 00:08:41.068 "zcopy": false, 00:08:41.068 "get_zone_info": false, 00:08:41.068 "zone_management": false, 00:08:41.068 "zone_append": false, 00:08:41.068 "compare": false, 00:08:41.068 "compare_and_write": false, 00:08:41.068 "abort": false, 00:08:41.068 "seek_hole": true, 00:08:41.068 "seek_data": true, 00:08:41.068 "copy": false, 00:08:41.068 "nvme_iov_md": false 00:08:41.068 }, 00:08:41.068 "driver_specific": { 00:08:41.068 "lvol": { 00:08:41.068 "lvol_store_uuid": "32de2da3-bd6b-4395-89a8-fb919c0ad59f", 00:08:41.068 "base_bdev": "aio_bdev", 00:08:41.068 "thin_provision": false, 00:08:41.068 "num_allocated_clusters": 38, 00:08:41.068 "snapshot": false, 00:08:41.068 "clone": false, 00:08:41.068 "esnap_clone": false 00:08:41.068 } 00:08:41.068 } 00:08:41.068 } 00:08:41.068 ] 00:08:41.068 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:41.068 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:41.068 13:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32de2da3-bd6b-4395-89a8-fb919c0ad59f 00:08:41.326 13:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:41.326 13:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32de2da3-bd6b-4395-89a8-fb919c0ad59f 00:08:41.326 13:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:41.619 13:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:41.619 13:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c7afc200-95e6-41f6-a927-4baf740b7a45 00:08:41.876 13:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32de2da3-bd6b-4395-89a8-fb919c0ad59f 00:08:42.134 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:42.392 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:42.958 ************************************ 00:08:42.958 END TEST lvs_grow_clean 00:08:42.958 ************************************ 00:08:42.958 00:08:42.958 real 0m18.919s 00:08:42.958 user 0m17.843s 00:08:42.958 sys 0m2.705s 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:42.958 ************************************ 00:08:42.958 START TEST lvs_grow_dirty 00:08:42.958 ************************************ 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:42.958 13:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:43.216 13:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:43.216 13:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:43.475 13:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:08:43.475 13:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:08:43.475 13:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:44.041 13:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:44.041 13:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:44.041 13:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 lvol 150 00:08:44.299 13:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=60f84338-bc4e-4730-89eb-2d63cac8d23f 00:08:44.299 13:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:44.299 13:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:44.557 [2024-11-20 13:27:56.285157] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:44.557 [2024-11-20 13:27:56.285264] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:44.557 true 00:08:44.557 13:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:08:44.557 13:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:44.814 13:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:44.814 13:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.073 13:27:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 60f84338-bc4e-4730-89eb-2d63cac8d23f 00:08:45.363 13:27:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:45.621 [2024-11-20 13:27:57.509775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:45.621 13:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:45.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:45.879 13:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63706 00:08:45.879 13:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:45.879 13:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:45.879 13:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63706 /var/tmp/bdevperf.sock 00:08:45.879 13:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63706 ']' 00:08:45.879 13:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:45.879 13:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.879 13:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:45.879 13:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.879 13:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.138 [2024-11-20 13:27:57.864959] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
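[Editor's note] The bdevperf initiator that starts here drives the I/O load for the dirty case, mirroring the clean case earlier in the trace. A sketch of the attach/run step, assuming the same sockets, NQN, and flags recorded above (bdevperf is started with -z so I/O only begins when perform_tests is invoked):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
bdevperf_pid=$!
# Once bdevperf's RPC socket is up, attach the exported namespace as bdev Nvme0n1 ...
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# ... and kick off the queued 10-second randwrite run, as at nvmf_lvs_grow.sh@55.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests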
00:08:46.138 [2024-11-20 13:27:57.865074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63706 ] 00:08:46.138 [2024-11-20 13:27:58.017668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.138 [2024-11-20 13:27:58.088992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.396 [2024-11-20 13:27:58.146602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.963 13:27:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.963 13:27:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:46.963 13:27:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:47.528 Nvme0n1 00:08:47.528 13:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:47.786 [ 00:08:47.786 { 00:08:47.786 "name": "Nvme0n1", 00:08:47.786 "aliases": [ 00:08:47.786 "60f84338-bc4e-4730-89eb-2d63cac8d23f" 00:08:47.786 ], 00:08:47.786 "product_name": "NVMe disk", 00:08:47.786 "block_size": 4096, 00:08:47.786 "num_blocks": 38912, 00:08:47.786 "uuid": "60f84338-bc4e-4730-89eb-2d63cac8d23f", 00:08:47.786 "numa_id": -1, 00:08:47.786 "assigned_rate_limits": { 00:08:47.786 "rw_ios_per_sec": 0, 00:08:47.786 "rw_mbytes_per_sec": 0, 00:08:47.786 "r_mbytes_per_sec": 0, 00:08:47.786 "w_mbytes_per_sec": 0 00:08:47.786 }, 00:08:47.786 "claimed": false, 00:08:47.786 "zoned": false, 00:08:47.786 "supported_io_types": { 00:08:47.786 "read": true, 00:08:47.786 "write": true, 00:08:47.786 "unmap": true, 00:08:47.786 "flush": true, 00:08:47.786 "reset": true, 00:08:47.786 "nvme_admin": true, 00:08:47.786 "nvme_io": true, 00:08:47.786 "nvme_io_md": false, 00:08:47.786 "write_zeroes": true, 00:08:47.786 "zcopy": false, 00:08:47.786 "get_zone_info": false, 00:08:47.786 "zone_management": false, 00:08:47.786 "zone_append": false, 00:08:47.786 "compare": true, 00:08:47.786 "compare_and_write": true, 00:08:47.786 "abort": true, 00:08:47.786 "seek_hole": false, 00:08:47.786 "seek_data": false, 00:08:47.786 "copy": true, 00:08:47.786 "nvme_iov_md": false 00:08:47.786 }, 00:08:47.786 "memory_domains": [ 00:08:47.786 { 00:08:47.786 "dma_device_id": "system", 00:08:47.786 "dma_device_type": 1 00:08:47.786 } 00:08:47.786 ], 00:08:47.786 "driver_specific": { 00:08:47.786 "nvme": [ 00:08:47.786 { 00:08:47.786 "trid": { 00:08:47.786 "trtype": "TCP", 00:08:47.787 "adrfam": "IPv4", 00:08:47.787 "traddr": "10.0.0.3", 00:08:47.787 "trsvcid": "4420", 00:08:47.787 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:47.787 }, 00:08:47.787 "ctrlr_data": { 00:08:47.787 "cntlid": 1, 00:08:47.787 "vendor_id": "0x8086", 00:08:47.787 "model_number": "SPDK bdev Controller", 00:08:47.787 "serial_number": "SPDK0", 00:08:47.787 "firmware_revision": "25.01", 00:08:47.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:47.787 "oacs": { 00:08:47.787 "security": 0, 00:08:47.787 "format": 0, 00:08:47.787 "firmware": 0, 
00:08:47.787 "ns_manage": 0 00:08:47.787 }, 00:08:47.787 "multi_ctrlr": true, 00:08:47.787 "ana_reporting": false 00:08:47.787 }, 00:08:47.787 "vs": { 00:08:47.787 "nvme_version": "1.3" 00:08:47.787 }, 00:08:47.787 "ns_data": { 00:08:47.787 "id": 1, 00:08:47.787 "can_share": true 00:08:47.787 } 00:08:47.787 } 00:08:47.787 ], 00:08:47.787 "mp_policy": "active_passive" 00:08:47.787 } 00:08:47.787 } 00:08:47.787 ] 00:08:47.787 13:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63735 00:08:47.787 13:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:47.787 13:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:48.045 Running I/O for 10 seconds... 00:08:49.049 Latency(us) 00:08:49.049 [2024-11-20T13:28:01.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.049 Nvme0n1 : 1.00 6654.00 25.99 0.00 0.00 0.00 0.00 0.00 00:08:49.049 [2024-11-20T13:28:01.006Z] =================================================================================================================== 00:08:49.049 [2024-11-20T13:28:01.006Z] Total : 6654.00 25.99 0.00 0.00 0.00 0.00 0.00 00:08:49.049 00:08:49.982 13:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:08:49.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.982 Nvme0n1 : 2.00 6692.50 26.14 0.00 0.00 0.00 0.00 0.00 00:08:49.982 [2024-11-20T13:28:01.939Z] =================================================================================================================== 00:08:49.982 [2024-11-20T13:28:01.939Z] Total : 6692.50 26.14 0.00 0.00 0.00 0.00 0.00 00:08:49.982 00:08:50.240 true 00:08:50.240 13:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:08:50.240 13:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:50.498 13:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:50.498 13:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:50.498 13:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63735 00:08:51.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.064 Nvme0n1 : 3.00 6620.67 25.86 0.00 0.00 0.00 0.00 0.00 00:08:51.064 [2024-11-20T13:28:03.021Z] =================================================================================================================== 00:08:51.064 [2024-11-20T13:28:03.021Z] Total : 6620.67 25.86 0.00 0.00 0.00 0.00 0.00 00:08:51.064 00:08:51.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.995 Nvme0n1 : 4.00 6582.25 25.71 0.00 0.00 0.00 0.00 0.00 00:08:51.995 [2024-11-20T13:28:03.952Z] 
=================================================================================================================== 00:08:51.995 [2024-11-20T13:28:03.952Z] Total : 6582.25 25.71 0.00 0.00 0.00 0.00 0.00 00:08:51.995 00:08:52.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.974 Nvme0n1 : 5.00 6579.20 25.70 0.00 0.00 0.00 0.00 0.00 00:08:52.974 [2024-11-20T13:28:04.931Z] =================================================================================================================== 00:08:52.974 [2024-11-20T13:28:04.931Z] Total : 6579.20 25.70 0.00 0.00 0.00 0.00 0.00 00:08:52.974 00:08:53.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.912 Nvme0n1 : 6.00 6305.50 24.63 0.00 0.00 0.00 0.00 0.00 00:08:53.912 [2024-11-20T13:28:05.869Z] =================================================================================================================== 00:08:53.912 [2024-11-20T13:28:05.869Z] Total : 6305.50 24.63 0.00 0.00 0.00 0.00 0.00 00:08:53.912 00:08:54.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.846 Nvme0n1 : 7.00 6293.71 24.58 0.00 0.00 0.00 0.00 0.00 00:08:54.846 [2024-11-20T13:28:06.803Z] =================================================================================================================== 00:08:54.846 [2024-11-20T13:28:06.803Z] Total : 6293.71 24.58 0.00 0.00 0.00 0.00 0.00 00:08:54.846 00:08:56.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.220 Nvme0n1 : 8.00 6253.12 24.43 0.00 0.00 0.00 0.00 0.00 00:08:56.220 [2024-11-20T13:28:08.177Z] =================================================================================================================== 00:08:56.220 [2024-11-20T13:28:08.177Z] Total : 6253.12 24.43 0.00 0.00 0.00 0.00 0.00 00:08:56.220 00:08:57.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.154 Nvme0n1 : 9.00 6221.56 24.30 0.00 0.00 0.00 0.00 0.00 00:08:57.154 [2024-11-20T13:28:09.111Z] =================================================================================================================== 00:08:57.154 [2024-11-20T13:28:09.111Z] Total : 6221.56 24.30 0.00 0.00 0.00 0.00 0.00 00:08:57.154 00:08:58.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.088 Nvme0n1 : 10.00 6196.30 24.20 0.00 0.00 0.00 0.00 0.00 00:08:58.088 [2024-11-20T13:28:10.045Z] =================================================================================================================== 00:08:58.088 [2024-11-20T13:28:10.045Z] Total : 6196.30 24.20 0.00 0.00 0.00 0.00 0.00 00:08:58.088 00:08:58.088 00:08:58.088 Latency(us) 00:08:58.088 [2024-11-20T13:28:10.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.088 Nvme0n1 : 10.01 6200.11 24.22 0.00 0.00 20637.03 14417.92 241172.48 00:08:58.088 [2024-11-20T13:28:10.045Z] =================================================================================================================== 00:08:58.088 [2024-11-20T13:28:10.045Z] Total : 6200.11 24.22 0.00 0.00 20637.03 14417.92 241172.48 00:08:58.088 { 00:08:58.088 "results": [ 00:08:58.088 { 00:08:58.088 "job": "Nvme0n1", 00:08:58.088 "core_mask": "0x2", 00:08:58.088 "workload": "randwrite", 00:08:58.088 "status": "finished", 00:08:58.088 "queue_depth": 128, 00:08:58.088 "io_size": 4096, 00:08:58.088 "runtime": 
10.014492, 00:08:58.088 "iops": 6200.114793641055, 00:08:58.088 "mibps": 24.219198412660372, 00:08:58.088 "io_failed": 0, 00:08:58.088 "io_timeout": 0, 00:08:58.088 "avg_latency_us": 20637.02877577046, 00:08:58.088 "min_latency_us": 14417.92, 00:08:58.088 "max_latency_us": 241172.48 00:08:58.088 } 00:08:58.088 ], 00:08:58.088 "core_count": 1 00:08:58.088 } 00:08:58.088 13:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63706 00:08:58.088 13:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63706 ']' 00:08:58.088 13:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63706 00:08:58.088 13:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:58.088 13:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.088 13:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63706 00:08:58.088 killing process with pid 63706 00:08:58.088 Received shutdown signal, test time was about 10.000000 seconds 00:08:58.088 00:08:58.088 Latency(us) 00:08:58.088 [2024-11-20T13:28:10.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.088 [2024-11-20T13:28:10.045Z] =================================================================================================================== 00:08:58.088 [2024-11-20T13:28:10.045Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:58.088 13:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:58.088 13:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:58.088 13:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63706' 00:08:58.088 13:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63706 00:08:58.088 13:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63706 00:08:58.346 13:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:58.604 13:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:58.862 13:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:08:58.862 13:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63338 00:08:59.428 13:28:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63338 00:08:59.428 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63338 Killed "${NVMF_APP[@]}" "$@" 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63873 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63873 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63873 ']' 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.428 13:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:59.428 [2024-11-20 13:28:11.249111] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:08:59.428 [2024-11-20 13:28:11.249517] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.686 [2024-11-20 13:28:11.392407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.686 [2024-11-20 13:28:11.456293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.686 [2024-11-20 13:28:11.456591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.686 [2024-11-20 13:28:11.456716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.686 [2024-11-20 13:28:11.456730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.686 [2024-11-20 13:28:11.456738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
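[Editor's note] The kill -9 above takes the first target (pid 63338) down while the lvstore is still open, so the freshly started target (pid 63873) has to recover it from the dirty AIO file. A sketch of the verification that follows in the trace, using this run's lvstore UUID and the jq filters recorded below; the blobstore recovery notices appear right after the bdev_aio_create call:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Re-registering the AIO bdev on the new target replays the lvstore metadata (dirty recovery).
$rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
free_clusters=$($rpc bdev_lvol_get_lvstores -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 | jq -r '.[0].free_clusters')
data_clusters=$($rpc bdev_lvol_get_lvstores -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 | jq -r '.[0].total_data_clusters')
# The grown, partially written lvstore must survive the unclean shutdown intact.
(( free_clusters == 61 )) && (( data_clusters == 99 ))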
00:08:59.686 [2024-11-20 13:28:11.457163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.686 [2024-11-20 13:28:11.511429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.619 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.619 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:00.619 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:00.620 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.620 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:00.620 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.620 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:00.878 [2024-11-20 13:28:12.639918] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:00.878 [2024-11-20 13:28:12.640563] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:00.878 [2024-11-20 13:28:12.640732] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:00.878 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:00.878 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 60f84338-bc4e-4730-89eb-2d63cac8d23f 00:09:00.878 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=60f84338-bc4e-4730-89eb-2d63cac8d23f 00:09:00.878 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.878 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:00.878 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.878 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.878 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:01.136 13:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60f84338-bc4e-4730-89eb-2d63cac8d23f -t 2000 00:09:01.394 [ 00:09:01.394 { 00:09:01.394 "name": "60f84338-bc4e-4730-89eb-2d63cac8d23f", 00:09:01.394 "aliases": [ 00:09:01.394 "lvs/lvol" 00:09:01.394 ], 00:09:01.394 "product_name": "Logical Volume", 00:09:01.394 "block_size": 4096, 00:09:01.394 "num_blocks": 38912, 00:09:01.394 "uuid": "60f84338-bc4e-4730-89eb-2d63cac8d23f", 00:09:01.394 "assigned_rate_limits": { 00:09:01.394 "rw_ios_per_sec": 0, 00:09:01.394 "rw_mbytes_per_sec": 0, 00:09:01.394 "r_mbytes_per_sec": 0, 00:09:01.394 "w_mbytes_per_sec": 0 00:09:01.394 }, 00:09:01.394 
"claimed": false, 00:09:01.394 "zoned": false, 00:09:01.394 "supported_io_types": { 00:09:01.394 "read": true, 00:09:01.394 "write": true, 00:09:01.394 "unmap": true, 00:09:01.394 "flush": false, 00:09:01.394 "reset": true, 00:09:01.394 "nvme_admin": false, 00:09:01.394 "nvme_io": false, 00:09:01.394 "nvme_io_md": false, 00:09:01.394 "write_zeroes": true, 00:09:01.394 "zcopy": false, 00:09:01.394 "get_zone_info": false, 00:09:01.394 "zone_management": false, 00:09:01.394 "zone_append": false, 00:09:01.394 "compare": false, 00:09:01.394 "compare_and_write": false, 00:09:01.394 "abort": false, 00:09:01.394 "seek_hole": true, 00:09:01.394 "seek_data": true, 00:09:01.394 "copy": false, 00:09:01.394 "nvme_iov_md": false 00:09:01.394 }, 00:09:01.394 "driver_specific": { 00:09:01.394 "lvol": { 00:09:01.394 "lvol_store_uuid": "7f473cb6-2198-4df2-a1e0-73f7721b87a4", 00:09:01.394 "base_bdev": "aio_bdev", 00:09:01.394 "thin_provision": false, 00:09:01.394 "num_allocated_clusters": 38, 00:09:01.394 "snapshot": false, 00:09:01.394 "clone": false, 00:09:01.394 "esnap_clone": false 00:09:01.394 } 00:09:01.394 } 00:09:01.394 } 00:09:01.394 ] 00:09:01.394 13:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:01.394 13:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:09:01.394 13:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:01.652 13:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:01.910 13:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:09:01.910 13:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:02.167 13:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:02.167 13:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:02.425 [2024-11-20 13:28:14.237427] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:02.425 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:09:02.425 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:02.425 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:09:02.425 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:02.425 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.425 13:28:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:02.425 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.425 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:02.425 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.425 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:02.425 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:02.425 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:09:02.684 request: 00:09:02.684 { 00:09:02.684 "uuid": "7f473cb6-2198-4df2-a1e0-73f7721b87a4", 00:09:02.684 "method": "bdev_lvol_get_lvstores", 00:09:02.684 "req_id": 1 00:09:02.684 } 00:09:02.684 Got JSON-RPC error response 00:09:02.684 response: 00:09:02.684 { 00:09:02.684 "code": -19, 00:09:02.684 "message": "No such device" 00:09:02.684 } 00:09:02.684 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:02.684 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:02.684 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:02.684 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:02.684 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:03.250 aio_bdev 00:09:03.250 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 60f84338-bc4e-4730-89eb-2d63cac8d23f 00:09:03.250 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=60f84338-bc4e-4730-89eb-2d63cac8d23f 00:09:03.250 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.250 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:03.250 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.250 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.250 13:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:03.508 13:28:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60f84338-bc4e-4730-89eb-2d63cac8d23f -t 2000 00:09:03.766 [ 00:09:03.766 { 
00:09:03.766 "name": "60f84338-bc4e-4730-89eb-2d63cac8d23f", 00:09:03.766 "aliases": [ 00:09:03.766 "lvs/lvol" 00:09:03.766 ], 00:09:03.766 "product_name": "Logical Volume", 00:09:03.766 "block_size": 4096, 00:09:03.766 "num_blocks": 38912, 00:09:03.766 "uuid": "60f84338-bc4e-4730-89eb-2d63cac8d23f", 00:09:03.766 "assigned_rate_limits": { 00:09:03.766 "rw_ios_per_sec": 0, 00:09:03.766 "rw_mbytes_per_sec": 0, 00:09:03.766 "r_mbytes_per_sec": 0, 00:09:03.766 "w_mbytes_per_sec": 0 00:09:03.766 }, 00:09:03.766 "claimed": false, 00:09:03.766 "zoned": false, 00:09:03.766 "supported_io_types": { 00:09:03.766 "read": true, 00:09:03.766 "write": true, 00:09:03.766 "unmap": true, 00:09:03.766 "flush": false, 00:09:03.766 "reset": true, 00:09:03.766 "nvme_admin": false, 00:09:03.766 "nvme_io": false, 00:09:03.766 "nvme_io_md": false, 00:09:03.766 "write_zeroes": true, 00:09:03.766 "zcopy": false, 00:09:03.766 "get_zone_info": false, 00:09:03.766 "zone_management": false, 00:09:03.766 "zone_append": false, 00:09:03.766 "compare": false, 00:09:03.766 "compare_and_write": false, 00:09:03.766 "abort": false, 00:09:03.766 "seek_hole": true, 00:09:03.766 "seek_data": true, 00:09:03.766 "copy": false, 00:09:03.766 "nvme_iov_md": false 00:09:03.766 }, 00:09:03.766 "driver_specific": { 00:09:03.766 "lvol": { 00:09:03.766 "lvol_store_uuid": "7f473cb6-2198-4df2-a1e0-73f7721b87a4", 00:09:03.766 "base_bdev": "aio_bdev", 00:09:03.766 "thin_provision": false, 00:09:03.766 "num_allocated_clusters": 38, 00:09:03.766 "snapshot": false, 00:09:03.766 "clone": false, 00:09:03.766 "esnap_clone": false 00:09:03.766 } 00:09:03.766 } 00:09:03.766 } 00:09:03.766 ] 00:09:03.766 13:28:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:03.766 13:28:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:09:03.766 13:28:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:04.024 13:28:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:04.024 13:28:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:09:04.024 13:28:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:04.282 13:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:04.282 13:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 60f84338-bc4e-4730-89eb-2d63cac8d23f 00:09:04.540 13:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7f473cb6-2198-4df2-a1e0-73f7721b87a4 00:09:05.107 13:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.365 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:05.623 ************************************ 00:09:05.623 END TEST lvs_grow_dirty 00:09:05.623 ************************************ 00:09:05.623 00:09:05.623 real 0m22.779s 00:09:05.623 user 0m47.164s 00:09:05.623 sys 0m8.299s 00:09:05.623 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.623 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:05.881 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:05.881 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:05.881 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:05.881 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:05.881 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:05.881 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:05.881 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:05.881 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:05.881 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:05.881 nvmf_trace.0 00:09:05.881 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:05.881 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:05.881 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.881 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:06.139 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:06.139 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:06.139 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:06.139 13:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:06.139 rmmod nvme_tcp 00:09:06.139 rmmod nvme_fabrics 00:09:06.139 rmmod nvme_keyring 00:09:06.139 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:06.139 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:06.139 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:06.139 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63873 ']' 00:09:06.139 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63873 00:09:06.139 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63873 ']' 00:09:06.139 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63873 00:09:06.139 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:06.139 13:28:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.139 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63873 00:09:06.397 killing process with pid 63873 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63873' 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63873 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63873 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:06.397 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:06.656 00:09:06.656 real 0m44.801s 00:09:06.656 user 1m12.799s 00:09:06.656 sys 0m12.093s 00:09:06.656 ************************************ 00:09:06.656 END TEST nvmf_lvs_grow 00:09:06.656 ************************************ 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.656 ************************************ 00:09:06.656 START TEST nvmf_bdev_io_wait 00:09:06.656 ************************************ 00:09:06.656 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:06.915 * Looking for test storage... 
00:09:06.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:06.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.915 --rc genhtml_branch_coverage=1 00:09:06.915 --rc genhtml_function_coverage=1 00:09:06.915 --rc genhtml_legend=1 00:09:06.915 --rc geninfo_all_blocks=1 00:09:06.915 --rc geninfo_unexecuted_blocks=1 00:09:06.915 00:09:06.915 ' 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:06.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.915 --rc genhtml_branch_coverage=1 00:09:06.915 --rc genhtml_function_coverage=1 00:09:06.915 --rc genhtml_legend=1 00:09:06.915 --rc geninfo_all_blocks=1 00:09:06.915 --rc geninfo_unexecuted_blocks=1 00:09:06.915 00:09:06.915 ' 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:06.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.915 --rc genhtml_branch_coverage=1 00:09:06.915 --rc genhtml_function_coverage=1 00:09:06.915 --rc genhtml_legend=1 00:09:06.915 --rc geninfo_all_blocks=1 00:09:06.915 --rc geninfo_unexecuted_blocks=1 00:09:06.915 00:09:06.915 ' 00:09:06.915 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:06.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.916 --rc genhtml_branch_coverage=1 00:09:06.916 --rc genhtml_function_coverage=1 00:09:06.916 --rc genhtml_legend=1 00:09:06.916 --rc geninfo_all_blocks=1 00:09:06.916 --rc geninfo_unexecuted_blocks=1 00:09:06.916 00:09:06.916 ' 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.916 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:06.916 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:06.917 
13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:06.917 Cannot find device "nvmf_init_br" 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:06.917 Cannot find device "nvmf_init_br2" 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:06.917 Cannot find device "nvmf_tgt_br" 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:06.917 Cannot find device "nvmf_tgt_br2" 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:06.917 Cannot find device "nvmf_init_br" 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:06.917 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:06.917 Cannot find device "nvmf_init_br2" 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:07.175 Cannot find device "nvmf_tgt_br" 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:07.175 Cannot find device "nvmf_tgt_br2" 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:07.175 Cannot find device "nvmf_br" 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:07.175 Cannot find device "nvmf_init_if" 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:07.175 Cannot find device "nvmf_init_if2" 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:07.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:07.175 
13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:07.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:07.175 13:28:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:07.175 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:07.176 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:07.176 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:07.432 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:07.432 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:09:07.432 00:09:07.432 --- 10.0.0.3 ping statistics --- 00:09:07.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.432 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:07.432 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:07.432 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:09:07.432 00:09:07.432 --- 10.0.0.4 ping statistics --- 00:09:07.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.432 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:07.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:07.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:09:07.432 00:09:07.432 --- 10.0.0.1 ping statistics --- 00:09:07.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.432 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:07.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:07.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:07.432 00:09:07.432 --- 10.0.0.2 ping statistics --- 00:09:07.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.432 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64261 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64261 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64261 ']' 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.432 13:28:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.432 [2024-11-20 13:28:19.270590] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:09:07.432 [2024-11-20 13:28:19.271508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.738 [2024-11-20 13:28:19.416976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.738 [2024-11-20 13:28:19.509602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.738 [2024-11-20 13:28:19.509928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.738 [2024-11-20 13:28:19.510128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.738 [2024-11-20 13:28:19.510287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.738 [2024-11-20 13:28:19.510315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.738 [2024-11-20 13:28:19.512002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.738 [2024-11-20 13:28:19.514228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.738 [2024-11-20 13:28:19.514326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.738 [2024-11-20 13:28:19.514345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.687 [2024-11-20 13:28:20.550473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.687 [2024-11-20 13:28:20.568590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.687 Malloc0 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.687 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.688 [2024-11-20 13:28:20.639064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64306 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64308 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64310 00:09:08.946 13:28:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:08.946 { 00:09:08.946 "params": { 00:09:08.946 "name": "Nvme$subsystem", 00:09:08.946 "trtype": "$TEST_TRANSPORT", 00:09:08.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.946 "adrfam": "ipv4", 00:09:08.946 "trsvcid": "$NVMF_PORT", 00:09:08.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.946 "hdgst": ${hdgst:-false}, 00:09:08.946 "ddgst": ${ddgst:-false} 00:09:08.946 }, 00:09:08.946 "method": "bdev_nvme_attach_controller" 00:09:08.946 } 00:09:08.946 EOF 00:09:08.946 )") 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64311 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:08.946 { 00:09:08.946 "params": { 00:09:08.946 "name": "Nvme$subsystem", 00:09:08.946 "trtype": "$TEST_TRANSPORT", 00:09:08.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.946 "adrfam": "ipv4", 00:09:08.946 "trsvcid": "$NVMF_PORT", 00:09:08.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.946 "hdgst": ${hdgst:-false}, 00:09:08.946 "ddgst": ${ddgst:-false} 00:09:08.946 }, 00:09:08.946 "method": "bdev_nvme_attach_controller" 00:09:08.946 } 00:09:08.946 EOF 00:09:08.946 )") 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:08.946 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:08.947 { 00:09:08.947 "params": { 00:09:08.947 "name": "Nvme$subsystem", 00:09:08.947 "trtype": "$TEST_TRANSPORT", 00:09:08.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.947 "adrfam": "ipv4", 00:09:08.947 
"trsvcid": "$NVMF_PORT", 00:09:08.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.947 "hdgst": ${hdgst:-false}, 00:09:08.947 "ddgst": ${ddgst:-false} 00:09:08.947 }, 00:09:08.947 "method": "bdev_nvme_attach_controller" 00:09:08.947 } 00:09:08.947 EOF 00:09:08.947 )") 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:08.947 "params": { 00:09:08.947 "name": "Nvme1", 00:09:08.947 "trtype": "tcp", 00:09:08.947 "traddr": "10.0.0.3", 00:09:08.947 "adrfam": "ipv4", 00:09:08.947 "trsvcid": "4420", 00:09:08.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.947 "hdgst": false, 00:09:08.947 "ddgst": false 00:09:08.947 }, 00:09:08.947 "method": "bdev_nvme_attach_controller" 00:09:08.947 }' 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:08.947 { 00:09:08.947 "params": { 00:09:08.947 "name": "Nvme$subsystem", 00:09:08.947 "trtype": "$TEST_TRANSPORT", 00:09:08.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.947 "adrfam": "ipv4", 00:09:08.947 "trsvcid": "$NVMF_PORT", 00:09:08.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.947 "hdgst": ${hdgst:-false}, 00:09:08.947 "ddgst": ${ddgst:-false} 00:09:08.947 }, 00:09:08.947 "method": "bdev_nvme_attach_controller" 00:09:08.947 } 00:09:08.947 EOF 00:09:08.947 )") 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:08.947 "params": { 00:09:08.947 "name": "Nvme1", 00:09:08.947 "trtype": "tcp", 00:09:08.947 "traddr": "10.0.0.3", 00:09:08.947 "adrfam": "ipv4", 00:09:08.947 "trsvcid": "4420", 00:09:08.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.947 "hdgst": false, 00:09:08.947 "ddgst": false 00:09:08.947 }, 00:09:08.947 "method": "bdev_nvme_attach_controller" 00:09:08.947 }' 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:08.947 "params": { 00:09:08.947 "name": "Nvme1", 00:09:08.947 "trtype": "tcp", 00:09:08.947 "traddr": "10.0.0.3", 00:09:08.947 "adrfam": "ipv4", 00:09:08.947 "trsvcid": "4420", 00:09:08.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.947 "hdgst": false, 00:09:08.947 "ddgst": false 00:09:08.947 }, 00:09:08.947 "method": "bdev_nvme_attach_controller" 00:09:08.947 }' 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:08.947 "params": { 00:09:08.947 "name": "Nvme1", 00:09:08.947 "trtype": "tcp", 00:09:08.947 "traddr": "10.0.0.3", 00:09:08.947 "adrfam": "ipv4", 00:09:08.947 "trsvcid": "4420", 00:09:08.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.947 "hdgst": false, 00:09:08.947 "ddgst": false 00:09:08.947 }, 00:09:08.947 "method": "bdev_nvme_attach_controller" 00:09:08.947 }' 00:09:08.947 [2024-11-20 13:28:20.701093] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:09:08.947 [2024-11-20 13:28:20.701255] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:08.947 [2024-11-20 13:28:20.710817] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:09:08.947 [2024-11-20 13:28:20.711144] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:08.947 13:28:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64306 00:09:08.947 [2024-11-20 13:28:20.749705] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:09:08.947 [2024-11-20 13:28:20.750152] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:08.947 [2024-11-20 13:28:20.777606] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:09:08.947 [2024-11-20 13:28:20.778110] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:09.207 [2024-11-20 13:28:20.917690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.207 [2024-11-20 13:28:20.986394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:09.207 [2024-11-20 13:28:20.993012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.207 [2024-11-20 13:28:20.999835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.207 [2024-11-20 13:28:21.053205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:09.207 [2024-11-20 13:28:21.067235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.207 [2024-11-20 13:28:21.107279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.466 [2024-11-20 13:28:21.178021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:09.466 [2024-11-20 13:28:21.191293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.466 [2024-11-20 13:28:21.221509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.466 Running I/O for 1 seconds... 00:09:09.466 Running I/O for 1 seconds... 00:09:09.466 [2024-11-20 13:28:21.275619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:09.466 [2024-11-20 13:28:21.288845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.466 Running I/O for 1 seconds... 00:09:09.724 Running I/O for 1 seconds... 
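All four bdevperf instances launched above share one target subsystem and differ only in core mask, instance id and workload: write on -m 0x10 -i 1, read on -m 0x20 -i 2, flush on -m 0x40 -i 3 and unmap on -m 0x80 -i 4, each at queue depth 128 with 4 KiB I/O for 1 second. A minimal sketch for re-running the write instance by hand is shown below; the file name /tmp/nvme1_target.json and the surrounding "subsystems" wrapper are assumptions for illustration (the test feeds the generated config through /dev/fd/63 instead), and it presumes the target from this log is still listening on 10.0.0.3:4420.

# Sketch only: config equivalent to what gen_nvmf_target_json resolves above,
# wrapped as an ordinary SPDK JSON config (wrapper structure assumed here).
cat > /tmp/nvme1_target.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
cd /home/vagrant/spdk_repo/spdk
# Write workload from the trace; swap -m/-i/-w for the read, flush and unmap runs.
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1_target.json \
    -q 128 -o 4096 -w write -t 1 -s 256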
00:09:10.658 5448.00 IOPS, 21.28 MiB/s [2024-11-20T13:28:22.615Z] 7556.00 IOPS, 29.52 MiB/s 00:09:10.658 Latency(us) 00:09:10.658 [2024-11-20T13:28:22.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.658 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:10.658 Nvme1n1 : 1.01 7597.34 29.68 0.00 0.00 16745.07 8340.95 25856.93 00:09:10.658 [2024-11-20T13:28:22.615Z] =================================================================================================================== 00:09:10.658 [2024-11-20T13:28:22.615Z] Total : 7597.34 29.68 0.00 0.00 16745.07 8340.95 25856.93 00:09:10.658 00:09:10.658 Latency(us) 00:09:10.658 [2024-11-20T13:28:22.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.658 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:10.658 Nvme1n1 : 1.02 5449.86 21.29 0.00 0.00 23117.09 3991.74 37653.41 00:09:10.658 [2024-11-20T13:28:22.615Z] =================================================================================================================== 00:09:10.658 [2024-11-20T13:28:22.615Z] Total : 5449.86 21.29 0.00 0.00 23117.09 3991.74 37653.41 00:09:10.658 5497.00 IOPS, 21.47 MiB/s 00:09:10.658 Latency(us) 00:09:10.658 [2024-11-20T13:28:22.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.658 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:10.658 Nvme1n1 : 1.01 5629.76 21.99 0.00 0.00 22656.24 6047.19 50998.92 00:09:10.658 [2024-11-20T13:28:22.615Z] =================================================================================================================== 00:09:10.658 [2024-11-20T13:28:22.615Z] Total : 5629.76 21.99 0.00 0.00 22656.24 6047.19 50998.92 00:09:10.658 166064.00 IOPS, 648.69 MiB/s 00:09:10.658 Latency(us) 00:09:10.658 [2024-11-20T13:28:22.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.658 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:10.658 Nvme1n1 : 1.00 165717.53 647.33 0.00 0.00 768.30 383.53 2070.34 00:09:10.658 [2024-11-20T13:28:22.615Z] =================================================================================================================== 00:09:10.658 [2024-11-20T13:28:22.615Z] Total : 165717.53 647.33 0.00 0.00 768.30 383.53 2070.34 00:09:10.658 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64308 00:09:10.658 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64310 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64311 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.917 rmmod nvme_tcp 00:09:10.917 rmmod nvme_fabrics 00:09:10.917 rmmod nvme_keyring 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:10.917 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64261 ']' 00:09:10.918 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64261 00:09:10.918 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64261 ']' 00:09:10.918 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64261 00:09:10.918 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:10.918 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.918 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64261 00:09:10.918 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.918 killing process with pid 64261 00:09:10.918 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.918 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64261' 00:09:10.918 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64261 00:09:10.918 13:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64261 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:11.176 13:28:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:11.176 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:11.434 ************************************ 00:09:11.434 END TEST nvmf_bdev_io_wait 00:09:11.434 ************************************ 00:09:11.434 00:09:11.434 real 0m4.705s 00:09:11.434 user 0m19.316s 00:09:11.434 sys 0m2.390s 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.434 ************************************ 00:09:11.434 START TEST nvmf_queue_depth 00:09:11.434 ************************************ 00:09:11.434 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:11.693 * Looking for test 
storage... 00:09:11.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:11.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.693 --rc genhtml_branch_coverage=1 00:09:11.693 --rc genhtml_function_coverage=1 00:09:11.693 --rc genhtml_legend=1 00:09:11.693 --rc geninfo_all_blocks=1 00:09:11.693 --rc geninfo_unexecuted_blocks=1 00:09:11.693 00:09:11.693 ' 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:11.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.693 --rc genhtml_branch_coverage=1 00:09:11.693 --rc genhtml_function_coverage=1 00:09:11.693 --rc genhtml_legend=1 00:09:11.693 --rc geninfo_all_blocks=1 00:09:11.693 --rc geninfo_unexecuted_blocks=1 00:09:11.693 00:09:11.693 ' 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:11.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.693 --rc genhtml_branch_coverage=1 00:09:11.693 --rc genhtml_function_coverage=1 00:09:11.693 --rc genhtml_legend=1 00:09:11.693 --rc geninfo_all_blocks=1 00:09:11.693 --rc geninfo_unexecuted_blocks=1 00:09:11.693 00:09:11.693 ' 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:11.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.693 --rc genhtml_branch_coverage=1 00:09:11.693 --rc genhtml_function_coverage=1 00:09:11.693 --rc genhtml_legend=1 00:09:11.693 --rc geninfo_all_blocks=1 00:09:11.693 --rc geninfo_unexecuted_blocks=1 00:09:11.693 00:09:11.693 ' 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.693 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.694 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:11.694 
13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:11.694 13:28:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:11.694 Cannot find device "nvmf_init_br" 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:11.694 Cannot find device "nvmf_init_br2" 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:11.694 Cannot find device "nvmf_tgt_br" 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.694 Cannot find device "nvmf_tgt_br2" 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:11.694 Cannot find device "nvmf_init_br" 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:11.694 Cannot find device "nvmf_init_br2" 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:11.694 Cannot find device "nvmf_tgt_br" 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:11.694 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:11.953 Cannot find device "nvmf_tgt_br2" 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:11.953 Cannot find device "nvmf_br" 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:11.953 Cannot find device "nvmf_init_if" 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:11.953 Cannot find device "nvmf_init_if2" 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.953 13:28:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:11.953 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.212 
13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:12.212 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.212 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.212 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.212 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:12.212 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:12.212 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:12.212 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.212 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:12.212 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:12.212 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:12.212 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:09:12.212 00:09:12.212 --- 10.0.0.3 ping statistics --- 00:09:12.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.212 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:09:12.212 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:12.212 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:12.212 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.110 ms 00:09:12.212 00:09:12.212 --- 10.0.0.4 ping statistics --- 00:09:12.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.212 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:09:12.212 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:09:12.212 00:09:12.212 --- 10.0.0.1 ping statistics --- 00:09:12.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.212 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:12.212 13:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:12.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:12.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:09:12.212 00:09:12.212 --- 10.0.0.2 ping statistics --- 00:09:12.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.212 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64598 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64598 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64598 ']' 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.212 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.213 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.213 13:28:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:12.213 [2024-11-20 13:28:24.092266] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:09:12.213 [2024-11-20 13:28:24.092364] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.486 [2024-11-20 13:28:24.244898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.486 [2024-11-20 13:28:24.326717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.486 [2024-11-20 13:28:24.326819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.486 [2024-11-20 13:28:24.326834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.486 [2024-11-20 13:28:24.326844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.486 [2024-11-20 13:28:24.326854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.486 [2024-11-20 13:28:24.327411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.486 [2024-11-20 13:28:24.407953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.433 [2024-11-20 13:28:25.206681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.433 Malloc0 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.433 [2024-11-20 13:28:25.260231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64630 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64630 /var/tmp/bdevperf.sock 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64630 ']' 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:13.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.433 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.433 [2024-11-20 13:28:25.340124] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:09:13.433 [2024-11-20 13:28:25.340329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64630 ] 00:09:13.691 [2024-11-20 13:28:25.494129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.691 [2024-11-20 13:28:25.599149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.949 [2024-11-20 13:28:25.694448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.949 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.949 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:13.949 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:13.949 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.949 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.949 NVMe0n1 00:09:13.949 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.949 13:28:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:14.207 Running I/O for 10 seconds... 00:09:16.516 5970.00 IOPS, 23.32 MiB/s [2024-11-20T13:28:29.412Z] 6215.00 IOPS, 24.28 MiB/s [2024-11-20T13:28:30.357Z] 6487.33 IOPS, 25.34 MiB/s [2024-11-20T13:28:31.293Z] 6634.00 IOPS, 25.91 MiB/s [2024-11-20T13:28:32.228Z] 6800.00 IOPS, 26.56 MiB/s [2024-11-20T13:28:33.167Z] 7015.00 IOPS, 27.40 MiB/s [2024-11-20T13:28:34.103Z] 7176.00 IOPS, 28.03 MiB/s [2024-11-20T13:28:35.484Z] 7303.62 IOPS, 28.53 MiB/s [2024-11-20T13:28:36.419Z] 7402.33 IOPS, 28.92 MiB/s [2024-11-20T13:28:36.419Z] 7473.60 IOPS, 29.19 MiB/s 00:09:24.462 Latency(us) 00:09:24.462 [2024-11-20T13:28:36.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.462 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:24.462 Verification LBA range: start 0x0 length 0x4000 00:09:24.462 NVMe0n1 : 10.11 7491.58 29.26 0.00 0.00 135951.18 28955.00 105334.23 00:09:24.462 [2024-11-20T13:28:36.419Z] =================================================================================================================== 00:09:24.462 [2024-11-20T13:28:36.419Z] Total : 7491.58 29.26 0.00 0.00 135951.18 28955.00 105334.23 00:09:24.462 { 00:09:24.462 "results": [ 00:09:24.462 { 00:09:24.462 "job": "NVMe0n1", 00:09:24.462 "core_mask": "0x1", 00:09:24.462 "workload": "verify", 00:09:24.462 "status": "finished", 00:09:24.462 "verify_range": { 00:09:24.462 "start": 0, 00:09:24.462 "length": 16384 00:09:24.462 }, 00:09:24.462 "queue_depth": 1024, 00:09:24.462 "io_size": 4096, 00:09:24.462 "runtime": 10.10842, 00:09:24.462 "iops": 7491.576329436252, 00:09:24.462 "mibps": 29.26397003686036, 00:09:24.462 "io_failed": 0, 00:09:24.462 "io_timeout": 0, 00:09:24.462 "avg_latency_us": 135951.17914833952, 00:09:24.462 "min_latency_us": 28954.996363636365, 00:09:24.462 "max_latency_us": 105334.22545454545 
00:09:24.462 } 00:09:24.462 ], 00:09:24.462 "core_count": 1 00:09:24.462 } 00:09:24.462 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64630 00:09:24.462 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64630 ']' 00:09:24.462 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64630 00:09:24.462 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:24.462 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.462 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64630 00:09:24.462 killing process with pid 64630 00:09:24.462 Received shutdown signal, test time was about 10.000000 seconds 00:09:24.462 00:09:24.462 Latency(us) 00:09:24.462 [2024-11-20T13:28:36.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.462 [2024-11-20T13:28:36.419Z] =================================================================================================================== 00:09:24.462 [2024-11-20T13:28:36.419Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:24.462 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.462 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.462 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64630' 00:09:24.462 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64630 00:09:24.462 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64630 00:09:24.720 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:24.720 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:24.720 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.720 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:24.720 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.720 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:24.720 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.720 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.720 rmmod nvme_tcp 00:09:24.720 rmmod nvme_fabrics 00:09:24.720 rmmod nvme_keyring 00:09:24.720 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.720 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:24.720 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:24.721 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64598 ']' 00:09:24.721 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64598 00:09:24.721 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64598 ']' 
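The bdevperf summary above is internally consistent: 7491.58 IOPS of 4096-byte I/O works out to 7491.58 * 4096 / 2^20 ≈ 29.26 MiB/s, matching the reported bandwidth, and by Little's law the average number of requests in flight is throughput times latency, 7491.58 IOPS * 0.136 s ≈ 1019, so the configured queue depth of 1024 was kept essentially full for the whole run. With the measurement done, the trace tears down: bdevperf (pid 64630) and the nvmf target app (pid 64598) are killed and the initiator-side nvme-tcp/nvme-fabrics modules are unloaded.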
00:09:24.721 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64598 00:09:24.721 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:24.721 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.721 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64598 00:09:24.721 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:24.721 killing process with pid 64598 00:09:24.721 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:24.721 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64598' 00:09:24.721 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64598 00:09:24.721 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64598 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:24.979 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:25.238 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:25.238 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:25.238 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:25.238 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:25.238 13:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:25.238 13:28:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:25.238 00:09:25.238 real 0m13.769s 00:09:25.238 user 0m22.967s 00:09:25.238 sys 0m2.562s 00:09:25.238 ************************************ 00:09:25.238 END TEST nvmf_queue_depth 00:09:25.238 ************************************ 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.238 ************************************ 00:09:25.238 START TEST nvmf_target_multipath 00:09:25.238 ************************************ 00:09:25.238 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:25.497 * Looking for test storage... 
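The test starting here, nvmf_target_multipath, exposes a single subsystem on two TCP listeners (10.0.0.3 and 10.0.0.4, both on port 4420), connects to both paths from the host, and then runs fio while flipping each listener's ANA state between optimized, non-optimized and inaccessible, as the trace below shows. The check_ana_state helper that recurs throughout simply polls sysfs until the kernel's view of a path matches the state just set on the target; in essence (device names as used in this run, first and second path):

  cat /sys/block/nvme0c0n1/ana_state   # e.g. "optimized"
  cat /sys/block/nvme0c1n1/ana_state   # e.g. "inaccessible"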
00:09:25.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:25.497 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.497 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.497 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.497 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.497 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.497 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.497 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.497 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.497 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.497 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.497 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.498 --rc genhtml_branch_coverage=1 00:09:25.498 --rc genhtml_function_coverage=1 00:09:25.498 --rc genhtml_legend=1 00:09:25.498 --rc geninfo_all_blocks=1 00:09:25.498 --rc geninfo_unexecuted_blocks=1 00:09:25.498 00:09:25.498 ' 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.498 --rc genhtml_branch_coverage=1 00:09:25.498 --rc genhtml_function_coverage=1 00:09:25.498 --rc genhtml_legend=1 00:09:25.498 --rc geninfo_all_blocks=1 00:09:25.498 --rc geninfo_unexecuted_blocks=1 00:09:25.498 00:09:25.498 ' 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.498 --rc genhtml_branch_coverage=1 00:09:25.498 --rc genhtml_function_coverage=1 00:09:25.498 --rc genhtml_legend=1 00:09:25.498 --rc geninfo_all_blocks=1 00:09:25.498 --rc geninfo_unexecuted_blocks=1 00:09:25.498 00:09:25.498 ' 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.498 --rc genhtml_branch_coverage=1 00:09:25.498 --rc genhtml_function_coverage=1 00:09:25.498 --rc genhtml_legend=1 00:09:25.498 --rc geninfo_all_blocks=1 00:09:25.498 --rc geninfo_unexecuted_blocks=1 00:09:25.498 00:09:25.498 ' 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.498 
13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.498 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.498 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:25.499 13:28:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:25.499 Cannot find device "nvmf_init_br" 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:25.499 Cannot find device "nvmf_init_br2" 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:25.499 Cannot find device "nvmf_tgt_br" 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:25.499 Cannot find device "nvmf_tgt_br2" 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:25.499 Cannot find device "nvmf_init_br" 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:25.499 Cannot find device "nvmf_init_br2" 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:25.499 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:25.757 Cannot find device "nvmf_tgt_br" 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:25.757 Cannot find device "nvmf_tgt_br2" 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:25.757 Cannot find device "nvmf_br" 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:25.757 Cannot find device "nvmf_init_if" 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:25.757 Cannot find device "nvmf_init_if2" 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:25.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:25.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:25.757 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
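nvmf_veth_init, traced here and continuing below, builds the per-run network: two initiator interfaces stay on the host (nvmf_init_if and nvmf_init_if2, 10.0.0.1 and 10.0.0.2) and two target interfaces are moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if and nvmf_tgt_if2, 10.0.0.3 and 10.0.0.4); each is one end of a veth pair whose *_br end is enslaved to the nvmf_br bridge, and iptables ACCEPT rules open TCP port 4420 before the cross-namespace pings verify connectivity. Condensed to one of the two pairs (names and addresses exactly as traced, link-up commands and iptables comments omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT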
00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:25.758 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:26.017 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:26.017 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:09:26.017 00:09:26.017 --- 10.0.0.3 ping statistics --- 00:09:26.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.017 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:26.017 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:26.017 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:09:26.017 00:09:26.017 --- 10.0.0.4 ping statistics --- 00:09:26.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.017 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:26.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:09:26.017 00:09:26.017 --- 10.0.0.1 ping statistics --- 00:09:26.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.017 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:26.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:09:26.017 00:09:26.017 --- 10.0.0.2 ping statistics --- 00:09:26.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.017 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=65005 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 65005 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 65005 ']' 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:26.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.017 13:28:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:26.017 [2024-11-20 13:28:37.893258] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:09:26.017 [2024-11-20 13:28:37.893424] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.276 [2024-11-20 13:28:38.052532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.276 [2024-11-20 13:28:38.120924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.276 [2024-11-20 13:28:38.120991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.276 [2024-11-20 13:28:38.121003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.276 [2024-11-20 13:28:38.121012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.276 [2024-11-20 13:28:38.121020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.276 [2024-11-20 13:28:38.124248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.276 [2024-11-20 13:28:38.124388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.276 [2024-11-20 13:28:38.124472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.276 [2024-11-20 13:28:38.124482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.276 [2024-11-20 13:28:38.180845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.216 13:28:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.216 13:28:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:27.216 13:28:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.216 13:28:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.216 13:28:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:27.216 13:28:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.216 13:28:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:27.474 [2024-11-20 13:28:39.297484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.474 13:28:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:28.041 Malloc0 00:09:28.042 13:28:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:28.300 13:28:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.866 13:28:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:29.123 [2024-11-20 13:28:40.925371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:29.123 13:28:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:29.381 [2024-11-20 13:28:41.225604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:29.381 13:28:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid=8ff08136-65da-4f4c-b769-a07096c587b5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:29.639 13:28:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid=8ff08136-65da-4f4c-b769-a07096c587b5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:29.639 13:28:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.639 13:28:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:09:29.639 13:28:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.639 13:28:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:29.639 13:28:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:09:32.181 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:32.181 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:32.182 13:28:43 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65110 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:32.182 13:28:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:32.182 [global] 00:09:32.182 thread=1 00:09:32.182 invalidate=1 00:09:32.182 rw=randrw 00:09:32.182 time_based=1 00:09:32.182 runtime=6 00:09:32.182 ioengine=libaio 00:09:32.182 direct=1 00:09:32.182 bs=4096 00:09:32.182 iodepth=128 00:09:32.182 norandommap=0 00:09:32.182 numjobs=1 00:09:32.182 00:09:32.182 verify_dump=1 00:09:32.182 verify_backlog=512 00:09:32.182 verify_state_save=0 00:09:32.182 do_verify=1 00:09:32.182 verify=crc32c-intel 00:09:32.182 [job0] 00:09:32.182 filename=/dev/nvme0n1 00:09:32.182 Could not set queue depth (nvme0n1) 00:09:32.182 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.182 fio-3.35 00:09:32.182 Starting 1 thread 00:09:32.747 13:28:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:33.006 13:28:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:33.572 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:33.572 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:33.572 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:33.572 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:33.572 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:33.572 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:33.572 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:33.572 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:33.572 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:33.572 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:33.572 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:33.572 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:33.572 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:33.830 13:28:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:34.395 13:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:34.395 13:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:34.395 13:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.395 13:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:34.395 13:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:34.395 13:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:34.395 13:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:34.395 13:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:34.395 13:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.395 13:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:34.395 13:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:34.395 13:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:34.395 13:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65110 00:09:38.604 00:09:38.604 job0: (groupid=0, jobs=1): err= 0: pid=65135: Wed Nov 20 13:28:49 2024 00:09:38.604 read: IOPS=9440, BW=36.9MiB/s (38.7MB/s)(222MiB/6007msec) 00:09:38.604 slat (usec): min=5, max=7724, avg=62.59, stdev=274.72 00:09:38.604 clat (usec): min=1142, max=42118, avg=9290.77, stdev=2411.55 00:09:38.604 lat (usec): min=1173, max=42127, avg=9353.36, stdev=2423.39 00:09:38.604 clat percentiles (usec): 00:09:38.604 | 1.00th=[ 4424], 5.00th=[ 6587], 10.00th=[ 7373], 20.00th=[ 7898], 00:09:38.604 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9110], 00:09:38.604 | 70.00th=[ 9634], 80.00th=[10683], 90.00th=[12518], 95.00th=[13698], 00:09:38.604 | 99.00th=[16319], 99.50th=[19792], 99.90th=[28443], 99.95th=[35914], 00:09:38.604 | 99.99th=[42206] 00:09:38.604 bw ( KiB/s): min= 6232, max=23864, per=50.73%, avg=19158.67, stdev=5669.53, samples=12 00:09:38.605 iops : min= 1558, max= 5966, avg=4789.67, stdev=1417.38, samples=12 00:09:38.605 write: IOPS=5474, BW=21.4MiB/s (22.4MB/s)(113MiB/5280msec); 0 zone resets 00:09:38.605 slat (usec): min=10, max=6303, avg=71.79, stdev=177.21 00:09:38.605 clat (usec): min=1063, max=39751, avg=8135.21, stdev=2295.16 00:09:38.605 lat (usec): min=1090, max=39807, avg=8207.01, stdev=2307.63 00:09:38.605 clat percentiles (usec): 00:09:38.605 | 1.00th=[ 3392], 5.00th=[ 4621], 10.00th=[ 5932], 20.00th=[ 7046], 00:09:38.605 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8225], 00:09:38.605 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10421], 95.00th=[11731], 00:09:38.605 | 99.00th=[13698], 99.50th=[16450], 99.90th=[35390], 99.95th=[36439], 00:09:38.605 | 99.99th=[39584] 00:09:38.605 bw ( KiB/s): min= 6816, max=23992, per=87.81%, avg=19229.92, stdev=5418.21, samples=12 00:09:38.605 iops : min= 1704, max= 5998, avg=4807.42, stdev=1354.54, samples=12 00:09:38.605 lat (msec) : 2=0.08%, 4=1.24%, 10=77.19%, 20=21.11%, 50=0.38% 00:09:38.605 cpu : usr=5.56%, sys=22.76%, ctx=4842, majf=0, minf=90 00:09:38.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:38.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.605 issued rwts: total=56708,28907,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.605 00:09:38.605 Run status group 0 (all jobs): 00:09:38.605 READ: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=222MiB (232MB), run=6007-6007msec 00:09:38.605 WRITE: bw=21.4MiB/s (22.4MB/s), 21.4MiB/s-21.4MiB/s (22.4MB/s-22.4MB/s), io=113MiB (118MB), run=5280-5280msec 00:09:38.605 00:09:38.605 Disk stats (read/write): 00:09:38.605 nvme0n1: ios=55889/28365, merge=0/0, ticks=494700/215329, in_queue=710029, util=98.71% 00:09:38.605 13:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:38.605 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65217 00:09:38.863 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:38.864 13:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:38.864 [global] 00:09:38.864 thread=1 00:09:38.864 invalidate=1 00:09:38.864 rw=randrw 00:09:38.864 time_based=1 00:09:38.864 runtime=6 00:09:38.864 ioengine=libaio 00:09:38.864 direct=1 00:09:38.864 bs=4096 00:09:38.864 iodepth=128 00:09:38.864 norandommap=0 00:09:38.864 numjobs=1 00:09:38.864 00:09:38.864 verify_dump=1 00:09:38.864 verify_backlog=512 00:09:38.864 verify_state_save=0 00:09:38.864 do_verify=1 00:09:38.864 verify=crc32c-intel 00:09:38.864 [job0] 00:09:38.864 filename=/dev/nvme0n1 00:09:38.864 Could not set queue depth (nvme0n1) 00:09:39.122 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:39.122 fio-3.35 00:09:39.122 Starting 1 thread 00:09:40.056 13:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:40.313 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:40.571 
13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:40.571 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:40.571 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:40.571 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:40.571 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:40.571 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:40.571 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:40.571 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:40.571 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:40.571 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:40.571 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:40.571 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:40.571 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:40.828 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:41.086 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:41.086 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:41.086 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:41.086 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:41.086 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:41.086 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:41.086 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:41.086 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:41.086 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:41.086 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:41.086 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:41.086 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:41.086 13:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65217 00:09:45.269 00:09:45.269 job0: (groupid=0, jobs=1): err= 0: pid=65238: Wed Nov 20 13:28:57 2024 00:09:45.269 read: IOPS=11.0k, BW=43.0MiB/s (45.1MB/s)(258MiB/6002msec) 00:09:45.269 slat (usec): min=5, max=8821, avg=44.87, stdev=200.79 00:09:45.269 clat (usec): min=287, max=18372, avg=7957.31, stdev=2552.83 00:09:45.269 lat (usec): min=296, max=18759, avg=8002.18, stdev=2567.90 00:09:45.269 clat percentiles (usec): 00:09:45.269 | 1.00th=[ 979], 5.00th=[ 3261], 10.00th=[ 4752], 20.00th=[ 5997], 00:09:45.269 | 30.00th=[ 7308], 40.00th=[ 7898], 50.00th=[ 8291], 60.00th=[ 8586], 00:09:45.269 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[10945], 95.00th=[12387], 00:09:45.270 | 99.00th=[14484], 99.50th=[15664], 99.90th=[17171], 99.95th=[17433], 00:09:45.270 | 99.99th=[18220] 00:09:45.270 bw ( KiB/s): min=11072, max=37992, per=53.16%, avg=23432.55, stdev=7746.32, samples=11 00:09:45.270 iops : min= 2768, max= 9498, avg=5858.09, stdev=1936.59, samples=11 00:09:45.270 write: IOPS=6399, BW=25.0MiB/s (26.2MB/s)(136MiB/5449msec); 0 zone resets 00:09:45.270 slat (usec): min=13, max=2769, avg=56.20, stdev=143.04 00:09:45.270 clat (usec): min=277, max=18095, avg=6764.08, stdev=2204.78 00:09:45.270 lat (usec): min=303, max=18128, avg=6820.28, stdev=2218.77 00:09:45.270 clat percentiles (usec): 00:09:45.270 | 1.00th=[ 1385], 5.00th=[ 3163], 10.00th=[ 3818], 20.00th=[ 4555], 00:09:45.270 | 30.00th=[ 5473], 40.00th=[ 6849], 50.00th=[ 7308], 60.00th=[ 7635], 00:09:45.270 | 70.00th=[ 7898], 80.00th=[ 8225], 90.00th=[ 8848], 95.00th=[10028], 00:09:45.270 | 99.00th=[12387], 99.50th=[13698], 99.90th=[16057], 99.95th=[16581], 00:09:45.270 | 99.99th=[17171] 00:09:45.270 bw ( KiB/s): min=11680, max=37184, per=91.56%, avg=23437.64, stdev=7531.74, samples=11 00:09:45.270 iops : min= 2920, max= 9296, avg=5859.36, stdev=1882.94, samples=11 00:09:45.270 lat (usec) : 500=0.07%, 750=0.23%, 1000=0.57% 00:09:45.270 lat (msec) : 2=1.40%, 4=6.41%, 10=80.69%, 20=10.63% 00:09:45.270 cpu : usr=5.65%, sys=23.68%, ctx=6089, majf=0, minf=78 00:09:45.270 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:45.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.270 issued rwts: total=66138,34869,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.270 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:09:45.270 00:09:45.270 Run status group 0 (all jobs): 00:09:45.270 READ: bw=43.0MiB/s (45.1MB/s), 43.0MiB/s-43.0MiB/s (45.1MB/s-45.1MB/s), io=258MiB (271MB), run=6002-6002msec 00:09:45.270 WRITE: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=136MiB (143MB), run=5449-5449msec 00:09:45.270 00:09:45.270 Disk stats (read/write): 00:09:45.270 nvme0n1: ios=65290/34315, merge=0/0, ticks=495784/216070, in_queue=711854, util=98.63% 00:09:45.270 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:45.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:45.270 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:45.270 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:45.270 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.270 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:45.270 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.270 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:45.270 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:45.270 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.834 rmmod nvme_tcp 00:09:45.834 rmmod nvme_fabrics 00:09:45.834 rmmod nvme_keyring 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
65005 ']' 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 65005 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 65005 ']' 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 65005 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65005 00:09:45.834 killing process with pid 65005 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65005' 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 65005 00:09:45.834 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 65005 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:46.094 13:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:46.094 13:28:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:46.094 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:46.094 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:46.352 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:46.353 00:09:46.353 real 0m20.991s 00:09:46.353 user 1m18.888s 00:09:46.353 sys 0m10.291s 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:46.353 ************************************ 00:09:46.353 END TEST nvmf_target_multipath 00:09:46.353 ************************************ 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.353 ************************************ 00:09:46.353 START TEST nvmf_zcopy 00:09:46.353 ************************************ 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:46.353 * Looking for test storage... 
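[editor's note] For readers following the multipath run that just finished above: the repeated check_ana_state / multipath.sh@25 xtrace lines amount to polling a controller path's ana_state file under /sys/block until it matches the state pushed via scripts/rpc.py nvmf_subsystem_listener_set_ana_state. A minimal bash sketch of that polling pattern, reconstructed from the trace for illustration only (it is not the verbatim test/nvmf/target/multipath.sh helper; the 20-second budget and the sysfs path are taken from the trace):

check_ana_state_sketch() {
    # Hypothetical reconstruction of the helper traced above, not the real script:
    # wait (up to ~20s) for /sys/block/<path>/ana_state to exist and report the expected state.
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f ]] || [[ $(< "$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- <= 0 )) && return 1
        sleep 1
    done
}
# Usage mirroring the trace: after
#   scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
# the host-visible state is awaited with
#   check_ana_state_sketch nvme0c0n1 non-optimized
# (note the RPC uses non_optimized while the kernel sysfs file reports non-optimized, as in the log)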
00:09:46.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:46.353 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.612 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:46.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.612 --rc genhtml_branch_coverage=1 00:09:46.612 --rc genhtml_function_coverage=1 00:09:46.612 --rc genhtml_legend=1 00:09:46.612 --rc geninfo_all_blocks=1 00:09:46.612 --rc geninfo_unexecuted_blocks=1 00:09:46.612 00:09:46.612 ' 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:46.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.613 --rc genhtml_branch_coverage=1 00:09:46.613 --rc genhtml_function_coverage=1 00:09:46.613 --rc genhtml_legend=1 00:09:46.613 --rc geninfo_all_blocks=1 00:09:46.613 --rc geninfo_unexecuted_blocks=1 00:09:46.613 00:09:46.613 ' 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:46.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.613 --rc genhtml_branch_coverage=1 00:09:46.613 --rc genhtml_function_coverage=1 00:09:46.613 --rc genhtml_legend=1 00:09:46.613 --rc geninfo_all_blocks=1 00:09:46.613 --rc geninfo_unexecuted_blocks=1 00:09:46.613 00:09:46.613 ' 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:46.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.613 --rc genhtml_branch_coverage=1 00:09:46.613 --rc genhtml_function_coverage=1 00:09:46.613 --rc genhtml_legend=1 00:09:46.613 --rc geninfo_all_blocks=1 00:09:46.613 --rc geninfo_unexecuted_blocks=1 00:09:46.613 00:09:46.613 ' 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.613 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:46.613 Cannot find device "nvmf_init_br" 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:46.613 13:28:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:46.613 Cannot find device "nvmf_init_br2" 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:46.613 Cannot find device "nvmf_tgt_br" 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.613 Cannot find device "nvmf_tgt_br2" 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:46.613 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:46.613 Cannot find device "nvmf_init_br" 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:46.614 Cannot find device "nvmf_init_br2" 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:46.614 Cannot find device "nvmf_tgt_br" 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:46.614 Cannot find device "nvmf_tgt_br2" 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:46.614 Cannot find device "nvmf_br" 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:46.614 Cannot find device "nvmf_init_if" 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:46.614 Cannot find device "nvmf_init_if2" 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:46.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:46.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:46.614 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:46.872 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:46.873 13:28:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:46.873 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:46.873 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:09:46.873 00:09:46.873 --- 10.0.0.3 ping statistics --- 00:09:46.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.873 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:46.873 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:46.873 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:09:46.873 00:09:46.873 --- 10.0.0.4 ping statistics --- 00:09:46.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.873 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:46.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:46.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:09:46.873 00:09:46.873 --- 10.0.0.1 ping statistics --- 00:09:46.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.873 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:46.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:46.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:09:46.873 00:09:46.873 --- 10.0.0.2 ping statistics --- 00:09:46.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.873 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65566 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65566 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65566 ']' 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.873 13:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.132 [2024-11-20 13:28:58.835008] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:09:47.132 [2024-11-20 13:28:58.835108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.132 [2024-11-20 13:28:58.984450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.132 [2024-11-20 13:28:59.049140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.132 [2024-11-20 13:28:59.049222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.132 [2024-11-20 13:28:59.049236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.132 [2024-11-20 13:28:59.049244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.132 [2024-11-20 13:28:59.049251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.132 [2024-11-20 13:28:59.049657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.391 [2024-11-20 13:28:59.105053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:47.958 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.958 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:47.958 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:47.958 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:47.958 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.275 [2024-11-20 13:28:59.940876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.275 [2024-11-20 13:28:59.956986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.275 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:48.276 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.276 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.276 malloc0 00:09:48.276 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.276 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:48.276 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.276 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.276 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.276 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:48.276 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:48.276 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:48.276 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:48.276 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:48.276 13:28:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:48.276 { 00:09:48.276 "params": { 00:09:48.276 "name": "Nvme$subsystem", 00:09:48.276 "trtype": "$TEST_TRANSPORT", 00:09:48.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:48.276 "adrfam": "ipv4", 00:09:48.276 "trsvcid": "$NVMF_PORT", 00:09:48.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:48.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:48.276 "hdgst": ${hdgst:-false}, 00:09:48.276 "ddgst": ${ddgst:-false} 00:09:48.276 }, 00:09:48.276 "method": "bdev_nvme_attach_controller" 00:09:48.276 } 00:09:48.276 EOF 00:09:48.276 )") 00:09:48.276 13:29:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:48.276 13:29:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:48.276 13:29:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:48.276 13:29:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:48.276 "params": { 00:09:48.276 "name": "Nvme1", 00:09:48.276 "trtype": "tcp", 00:09:48.276 "traddr": "10.0.0.3", 00:09:48.276 "adrfam": "ipv4", 00:09:48.276 "trsvcid": "4420", 00:09:48.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:48.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:48.276 "hdgst": false, 00:09:48.276 "ddgst": false 00:09:48.276 }, 00:09:48.276 "method": "bdev_nvme_attach_controller" 00:09:48.276 }' 00:09:48.276 [2024-11-20 13:29:00.050690] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:09:48.276 [2024-11-20 13:29:00.050819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65600 ] 00:09:48.276 [2024-11-20 13:29:00.193155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.533 [2024-11-20 13:29:00.281693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.533 [2024-11-20 13:29:00.369590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:48.792 Running I/O for 10 seconds... 00:09:50.659 5655.00 IOPS, 44.18 MiB/s [2024-11-20T13:29:03.548Z] 5698.50 IOPS, 44.52 MiB/s [2024-11-20T13:29:04.922Z] 5587.00 IOPS, 43.65 MiB/s [2024-11-20T13:29:05.857Z] 5528.25 IOPS, 43.19 MiB/s [2024-11-20T13:29:06.792Z] 5482.60 IOPS, 42.83 MiB/s [2024-11-20T13:29:07.727Z] 5443.33 IOPS, 42.53 MiB/s [2024-11-20T13:29:08.662Z] 5489.57 IOPS, 42.89 MiB/s [2024-11-20T13:29:09.597Z] 5521.88 IOPS, 43.14 MiB/s [2024-11-20T13:29:10.531Z] 5552.00 IOPS, 43.38 MiB/s [2024-11-20T13:29:10.531Z] 5578.80 IOPS, 43.58 MiB/s 00:09:58.574 Latency(us) 00:09:58.574 [2024-11-20T13:29:10.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.574 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:58.574 Verification LBA range: start 0x0 length 0x1000 00:09:58.574 Nvme1n1 : 10.02 5581.01 43.60 0.00 0.00 22861.78 2189.50 32648.84 00:09:58.574 [2024-11-20T13:29:10.531Z] =================================================================================================================== 00:09:58.574 [2024-11-20T13:29:10.531Z] Total : 5581.01 43.60 0.00 0.00 22861.78 2189.50 32648.84 00:09:58.833 13:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65718 00:09:58.833 13:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:58.833 13:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:58.833 13:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.833 13:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:58.833 13:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:58.833 13:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:58.833 13:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:58.833 13:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:58.833 { 00:09:58.833 "params": { 00:09:58.833 "name": "Nvme$subsystem", 00:09:58.833 "trtype": "$TEST_TRANSPORT", 00:09:58.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:58.833 "adrfam": "ipv4", 00:09:58.833 "trsvcid": "$NVMF_PORT", 00:09:58.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:58.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:58.833 "hdgst": ${hdgst:-false}, 00:09:58.833 "ddgst": ${ddgst:-false} 00:09:58.833 }, 00:09:58.833 "method": "bdev_nvme_attach_controller" 00:09:58.833 } 00:09:58.833 EOF 00:09:58.833 )") 00:09:58.833 13:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:58.833 [2024-11-20 13:29:10.736734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.833 [2024-11-20 13:29:10.736784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.833 13:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:58.833 13:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:58.833 13:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:58.833 "params": { 00:09:58.833 "name": "Nvme1", 00:09:58.833 "trtype": "tcp", 00:09:58.833 "traddr": "10.0.0.3", 00:09:58.833 "adrfam": "ipv4", 00:09:58.833 "trsvcid": "4420", 00:09:58.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:58.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:58.833 "hdgst": false, 00:09:58.833 "ddgst": false 00:09:58.833 }, 00:09:58.833 "method": "bdev_nvme_attach_controller" 00:09:58.833 }' 00:09:58.833 [2024-11-20 13:29:10.748691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.833 [2024-11-20 13:29:10.748723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.833 [2024-11-20 13:29:10.760683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.833 [2024-11-20 13:29:10.760713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.833 [2024-11-20 13:29:10.772689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.833 [2024-11-20 13:29:10.772718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.833 [2024-11-20 13:29:10.773099] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
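The xtrace above shows `gen_nvmf_target_json` building the bdevperf config for this second run: one `bdev_nvme_attach_controller` fragment per subsystem is appended via a heredoc, the fragments are joined with `IFS=,`, normalized with `jq .`, and handed to bdevperf over `/dev/fd/63` together with `-t 5 -q 128 -w randrw -M 50 -o 8192`. A minimal sketch of the same flow, assuming the standard SPDK `"subsystems"`/`"bdev"` JSON wrapper (only the attach-controller entry itself is printed verbatim in the log, so the wrapper and helper name here are illustrative):

```bash
#!/usr/bin/env bash
# Sketch only: approximates what gen_nvmf_target_json does in the log above.
# Address, port and NQNs are copied from the printed config; the wrapper
# layout and the helper name are assumptions, not the exact SPDK test code.
gen_target_json() {
cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Same invocation shape as the run above: the config arrives on /dev/fd/NN
# via process substitution; 5 s randrw, 50/50 mix, queue depth 128, 8 KiB I/O.
./build/examples/bdevperf --json <(gen_target_json | jq .) \
    -t 5 -q 128 -w randrw -M 50 -o 8192
```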
00:09:58.833 [2024-11-20 13:29:10.773180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65718 ] 00:09:58.833 [2024-11-20 13:29:10.784682] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.833 [2024-11-20 13:29:10.784710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.091 [2024-11-20 13:29:10.796690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.091 [2024-11-20 13:29:10.796722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.091 [2024-11-20 13:29:10.804689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.091 [2024-11-20 13:29:10.804717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.091 [2024-11-20 13:29:10.816694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.091 [2024-11-20 13:29:10.816722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.091 [2024-11-20 13:29:10.828693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.091 [2024-11-20 13:29:10.828723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.091 [2024-11-20 13:29:10.840697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.091 [2024-11-20 13:29:10.840724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.091 [2024-11-20 13:29:10.852701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.091 [2024-11-20 13:29:10.852729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.091 [2024-11-20 13:29:10.864713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.091 [2024-11-20 13:29:10.864745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.091 [2024-11-20 13:29:10.876712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.091 [2024-11-20 13:29:10.876739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.091 [2024-11-20 13:29:10.888726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.091 [2024-11-20 13:29:10.888760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.091 [2024-11-20 13:29:10.900721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.091 [2024-11-20 13:29:10.900756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.091 [2024-11-20 13:29:10.912723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.091 [2024-11-20 13:29:10.912752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.091 [2024-11-20 13:29:10.917074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.091 [2024-11-20 13:29:10.924737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.092 [2024-11-20 13:29:10.924768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:59.092 [2024-11-20 13:29:10.936747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.092 [2024-11-20 13:29:10.936780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.092 [2024-11-20 13:29:10.948743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.092 [2024-11-20 13:29:10.948771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.092 [2024-11-20 13:29:10.956745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.092 [2024-11-20 13:29:10.956777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.092 [2024-11-20 13:29:10.968768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.092 [2024-11-20 13:29:10.968801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.092 [2024-11-20 13:29:10.978611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.092 [2024-11-20 13:29:10.980756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.092 [2024-11-20 13:29:10.980786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.092 [2024-11-20 13:29:10.992765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.092 [2024-11-20 13:29:10.992802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.092 [2024-11-20 13:29:11.004782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.092 [2024-11-20 13:29:11.004823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.092 [2024-11-20 13:29:11.016786] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.092 [2024-11-20 13:29:11.016833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.092 [2024-11-20 13:29:11.028796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.092 [2024-11-20 13:29:11.028837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.092 [2024-11-20 13:29:11.040792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.092 [2024-11-20 13:29:11.040830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.092 [2024-11-20 13:29:11.041459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:59.351 [2024-11-20 13:29:11.052798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.052837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.064800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.064839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.076789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.076821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.088784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:59.351 [2024-11-20 13:29:11.088812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.100811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.100847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.112814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.112846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.124824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.124865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.136834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.136877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.148842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.148881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.160892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.160929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 Running I/O for 5 seconds... 00:09:59.351 [2024-11-20 13:29:11.176601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.176652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.192523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.192563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.209978] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.210016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.226465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.226503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.242555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.242594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.260842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.260896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.275787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.275826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.351 [2024-11-20 13:29:11.291417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.351 [2024-11-20 13:29:11.291455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
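From here to the end of the section the target log settles into a repeating pair: `spdk_nvmf_subsystem_add_ns_ext` rejecting the request with "Requested NSID 1 already in use" and `nvmf_rpc_ns_paused` reporting "Unable to add namespace". The pairs arrive every few milliseconds while the 5-second bdevperf run is in flight, so this is the namespace-add RPC path being exercised in a loop against a subsystem whose NSID 1 already exists, not a one-off failure. A hypothetical loop that would produce the same pattern (script path, bdev name, and iteration count are assumptions; `nvmf_subsystem_add_ns` is the SPDK RPC behind this path):

```bash
# Assumed reproduction of the repeating error pair above: keep asking the
# target to add NSID 1 to a subsystem that already owns it while I/O runs.
# Each attempt pauses the subsystem, fails in spdk_nvmf_subsystem_add_ns_ext,
# and is surfaced by the RPC layer as "Unable to add namespace".
rpc=./scripts/rpc.py                      # path assumed
nqn=nqn.2016-06.io.spdk:cnode1            # subsystem from the attach config above

for _ in $(seq 1 50); do                  # iteration count illustrative
    "$rpc" nvmf_subsystem_add_ns "$nqn" malloc0 -n 1 || true   # expected to fail
done
```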
00:09:59.609 [2024-11-20 13:29:11.309776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.309814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.324475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.324512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.334637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.334674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.350234] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.350271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.366257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.366290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.383720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.383757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.400558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.400599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.417035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.417073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.434144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.434181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.452472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.452509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.467685] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.467723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.483955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.483993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.493050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.493088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.509735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.509774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.526090] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 
[2024-11-20 13:29:11.526132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.536296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.536336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.609 [2024-11-20 13:29:11.550946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.609 [2024-11-20 13:29:11.550987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.868 [2024-11-20 13:29:11.567446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.868 [2024-11-20 13:29:11.567492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.868 [2024-11-20 13:29:11.584285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.868 [2024-11-20 13:29:11.584328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.868 [2024-11-20 13:29:11.600871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.868 [2024-11-20 13:29:11.600913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.868 [2024-11-20 13:29:11.616203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.868 [2024-11-20 13:29:11.616244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.868 [2024-11-20 13:29:11.626075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.868 [2024-11-20 13:29:11.626128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.868 [2024-11-20 13:29:11.641695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.868 [2024-11-20 13:29:11.641731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.868 [2024-11-20 13:29:11.658891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.868 [2024-11-20 13:29:11.658926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.868 [2024-11-20 13:29:11.674253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.868 [2024-11-20 13:29:11.674291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.868 [2024-11-20 13:29:11.693255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.868 [2024-11-20 13:29:11.693291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.869 [2024-11-20 13:29:11.708228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.869 [2024-11-20 13:29:11.708263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.869 [2024-11-20 13:29:11.717490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.869 [2024-11-20 13:29:11.717527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.869 [2024-11-20 13:29:11.732447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.869 [2024-11-20 13:29:11.732483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.869 [2024-11-20 13:29:11.743007] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.869 [2024-11-20 13:29:11.743043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.869 [2024-11-20 13:29:11.757640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.869 [2024-11-20 13:29:11.757675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.869 [2024-11-20 13:29:11.767170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.869 [2024-11-20 13:29:11.767217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.869 [2024-11-20 13:29:11.783286] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.869 [2024-11-20 13:29:11.783324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.869 [2024-11-20 13:29:11.799871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.869 [2024-11-20 13:29:11.799911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.869 [2024-11-20 13:29:11.815886] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.869 [2024-11-20 13:29:11.815926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:11.825810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:11.825848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:11.841702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:11.841744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:11.858467] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:11.858505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:11.876238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:11.876275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:11.891369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:11.891407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:11.900526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:11.900563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:11.916854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:11.916902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:11.934484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:11.934522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:11.949341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:11.949377] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:11.965054] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:11.965091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:11.983600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:11.983637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:11.998531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:11.998569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:12.008513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:12.008550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:12.024132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:12.024196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:12.040947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:12.040984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:12.057236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:12.057272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.127 [2024-11-20 13:29:12.073738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.127 [2024-11-20 13:29:12.073775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.090167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.090215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.107461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.107500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.123396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.123432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.141560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.141598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.156426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.156464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 11360.00 IOPS, 88.75 MiB/s [2024-11-20T13:29:12.343Z] [2024-11-20 13:29:12.172851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.172901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 
13:29:12.192093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.192135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.207403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.207446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.226231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.226266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.241965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.242003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.256708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.256747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.272521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.272561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.290664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.290703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.305771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.305811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.323277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.323314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.386 [2024-11-20 13:29:12.338770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.386 [2024-11-20 13:29:12.338806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.348289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.348326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.364698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.364750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.379838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.379890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.395556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.395625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.413646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.413688] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.427374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.427414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.442933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.442974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.454980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.455018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.470989] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.471031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.488783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.488828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.505134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.505174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.523976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.524014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.539177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.539227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.555792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.555831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.572198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.572256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.645 [2024-11-20 13:29:12.589467] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.645 [2024-11-20 13:29:12.589506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.605056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.605094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.614362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.614419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.630806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.630860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.646974] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.647029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.665485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.665523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.680119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.680156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.695773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.695825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.713230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.713268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.727986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.728024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.743702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.743756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.762935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.762988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.778146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.778214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.796174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.796224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.811055] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.904 [2024-11-20 13:29:12.811092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.904 [2024-11-20 13:29:12.826968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.905 [2024-11-20 13:29:12.827006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.905 [2024-11-20 13:29:12.843922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.905 [2024-11-20 13:29:12.844106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:12.860912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:12.860951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:12.877378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:12.877423] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:12.894079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:12.894117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:12.909709] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:12.909748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:12.919380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:12.919416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:12.935400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:12.935439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:12.952254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:12.952293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:12.968273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:12.968310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:12.986315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:12.986356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:13.000913] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:13.000952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:13.016878] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:13.016917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:13.035101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:13.035144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:13.050140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:13.050328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:13.066713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:13.066751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:13.083327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:13.083364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:13.099214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:13.099251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.163 [2024-11-20 13:29:13.116842] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.163 [2024-11-20 13:29:13.117022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.131972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.132125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.148462] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.148500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.165087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.165128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 11368.00 IOPS, 88.81 MiB/s [2024-11-20T13:29:13.380Z] [2024-11-20 13:29:13.181506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.181545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.198153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.198206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.214721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.214921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.230124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.230303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.239930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.239977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.254563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.254603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.269210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.269247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.279131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.279170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.291198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.291233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.306488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.306532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.322240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
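The per-second samples interleaved with the errors (11360.00 IOPS / 88.75 MiB/s earlier, 11368.00 IOPS / 88.81 MiB/s here) follow directly from the 8 KiB I/O size passed to bdevperf: MiB/s = IOPS x 8192 / 2^20. The same relation holds for the earlier 10-second verify run (5581.01 IOPS, 43.60 MiB/s). A quick check of the figures printed in the log:

```bash
# Sanity-check the throughput samples against -o 8192:
# MiB/s = IOPS * io_size_bytes / 2^20.
for iops in 5581.01 11360 11368; do
    awk -v iops="$iops" 'BEGIN { printf "%8.2f IOPS -> %6.2f MiB/s\n", iops, iops * 8192 / 1048576 }'
done
# Expected output:
#  5581.01 IOPS ->  43.60 MiB/s
# 11360.00 IOPS ->  88.75 MiB/s
# 11368.00 IOPS ->  88.81 MiB/s
```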
00:10:01.423 [2024-11-20 13:29:13.322298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.332091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.332132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.348236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.348283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.423 [2024-11-20 13:29:13.364593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.423 [2024-11-20 13:29:13.364635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.681 [2024-11-20 13:29:13.381591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.681 [2024-11-20 13:29:13.381631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.398043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.398093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.413445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.413495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.431665] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.431870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.447442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.447492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.463888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.463935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.480636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.480680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.496785] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.496825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.513387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.513426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.531513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.531551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.547153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.547207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.562550] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.562589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.578392] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.578429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.597233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.597271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.612200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.612267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.682 [2024-11-20 13:29:13.622228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.682 [2024-11-20 13:29:13.622263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.654307] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.654367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.669155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.669355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.685171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.685333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.703251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.703288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.718483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.718524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.728432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.728594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.745109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.745153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.760706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.760774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.771608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.771799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.786607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.786778] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.803380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.803424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.820216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.820258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.835150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.835205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.845900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.846065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.861140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.861317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.877532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.877571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.940 [2024-11-20 13:29:13.894359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.940 [2024-11-20 13:29:13.894404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:13.910639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:13.910677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:13.929711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:13.929751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:13.944248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:13.944285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:13.959236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:13.959273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:13.969269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:13.969308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:13.985465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:13.985504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:14.000743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:14.000781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:14.016647] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:14.016686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:14.033841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:14.033882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:14.049810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:14.049852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:14.067662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:14.067701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:14.082746] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:14.082912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:14.093173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:14.093222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:14.109073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:14.109110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:14.126182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:14.126230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.199 [2024-11-20 13:29:14.144357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.199 [2024-11-20 13:29:14.144394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.158586] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.158624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 11336.00 IOPS, 88.56 MiB/s [2024-11-20T13:29:14.415Z] [2024-11-20 13:29:14.176321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.176362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.191212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.191251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.200770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.200808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.216729] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.216767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.233050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:02.458 [2024-11-20 13:29:14.233094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.250819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.250867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.265276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.265325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.280360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.280552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.290061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.290102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.305990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.306033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.322838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.322877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.338448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.338486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.348216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.348253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.364976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.365019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.379629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.458 [2024-11-20 13:29:14.379818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.458 [2024-11-20 13:29:14.394438] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.459 [2024-11-20 13:29:14.394636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.459 [2024-11-20 13:29:14.410492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.459 [2024-11-20 13:29:14.410542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.427415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.427469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.444073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.444124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.460980] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.461044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.479040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.479092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.493954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.494145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.509783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.509935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.528397] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.528549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.542110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.542276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.558426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.558581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.575078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.575249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.590712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.590868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.606960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.607162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.625232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.625440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.640040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.640248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.716 [2024-11-20 13:29:14.657755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.716 [2024-11-20 13:29:14.657966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.674448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.674701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.690558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.690756] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.708678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.708886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.723492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.723689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.740490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.740654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.756665] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.756818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.772559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.772710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.782017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.782056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.798803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.798966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.815081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.815120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.833306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.833345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.848151] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.848197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.857833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.857870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.873959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.874000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.890310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.890351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.908832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.909015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.974 [2024-11-20 13:29:14.923852] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.974 [2024-11-20 13:29:14.924010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:14.935179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:14.935231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:14.950055] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:14.950094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:14.967805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:14.967844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:14.983269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:14.983306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:14.999901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:14.999941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:15.016570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:15.016609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:15.033304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:15.033342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:15.050134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:15.050177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:15.066670] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:15.066856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:15.084166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:15.084224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:15.100663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:15.100704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:15.117064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:15.117106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:15.136005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:15.136047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:15.151405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:15.151442] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 11318.50 IOPS, 88.43 MiB/s [2024-11-20T13:29:15.190Z] [2024-11-20 13:29:15.169842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:15.170004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.233 [2024-11-20 13:29:15.185297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.233 [2024-11-20 13:29:15.185334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.202341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.202380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.218242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.218281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.227778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.227815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.243997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.244037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.259855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.259894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.279065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.279104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.294364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.294406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.311797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.311837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.327709] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.327752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.346373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.346425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.361814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.361861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.379651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.379693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 
13:29:15.392999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.393205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.408970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.409143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.418935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.418974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.491 [2024-11-20 13:29:15.434909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.491 [2024-11-20 13:29:15.434948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.450939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.451002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.469520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.469565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.484475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.484520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.494270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.494309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.509508] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.509553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.524641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.524682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.534610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.534651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.550807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.550848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.566885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.566926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.585048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.585097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.600165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.600367] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.611024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.611063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.625941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.625981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.641601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.641641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.657469] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.657509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.674859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.674898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.690109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.690149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.749 [2024-11-20 13:29:15.700107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.749 [2024-11-20 13:29:15.700284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.715549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.715707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.730676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.730834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.746434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.746468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.764200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.764237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.780306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.780343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.797181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.797230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.815248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.815285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.830484] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.830523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.847678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.847720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.864147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.864200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.882158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.882341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.897408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.897575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.907329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.907366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.924051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.924093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.007 [2024-11-20 13:29:15.940768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.007 [2024-11-20 13:29:15.940975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.008 [2024-11-20 13:29:15.955912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.008 [2024-11-20 13:29:15.956074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:15.972157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:15.972211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:15.990363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:15.990401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:16.006540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:16.006579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:16.022835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:16.022874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:16.040700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:16.040739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:16.055652] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:16.055691] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:16.070762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:16.070802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:16.079977] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:16.080015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:16.094972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:16.095014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:16.110200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:16.110236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:16.119998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:16.120036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:16.135147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:16.135197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:16.145623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:16.145796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.266 [2024-11-20 13:29:16.161107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.266 [2024-11-20 13:29:16.161279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.267 11325.20 IOPS, 88.48 MiB/s [2024-11-20T13:29:16.224Z] [2024-11-20 13:29:16.176927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.267 [2024-11-20 13:29:16.177081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.267 00:10:04.267 Latency(us) 00:10:04.267 [2024-11-20T13:29:16.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.267 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:04.267 Nvme1n1 : 5.01 11335.60 88.56 0.00 0.00 11275.87 4825.83 19422.49 00:10:04.267 [2024-11-20T13:29:16.224Z] =================================================================================================================== 00:10:04.267 [2024-11-20T13:29:16.224Z] Total : 11335.60 88.56 0.00 0.00 11275.87 4825.83 19422.49 00:10:04.267 [2024-11-20 13:29:16.186051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.267 [2024-11-20 13:29:16.186088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.267 [2024-11-20 13:29:16.198041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.267 [2024-11-20 13:29:16.198076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.267 [2024-11-20 13:29:16.210073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.267 [2024-11-20 
13:29:16.210118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.222093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.222147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.234081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.234127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.246080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.246123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.258086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.258129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.270093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.270157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.282098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.282149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.294094] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.294140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.306093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.306138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.318092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.318135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.330093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.330134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.346113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.346161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.358108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.358149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.370096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.370131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 [2024-11-20 13:29:16.382098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.526 [2024-11-20 13:29:16.382131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.526 
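The repeated errors above are the zcopy test looping over the nvmf_subsystem_add_ns RPC for a namespace ID that is already allocated on nqn.2016-06.io.spdk:cnode1: subsystem.c rejects each request with "Requested NSID 1 already in use" and nvmf_rpc.c then reports "Unable to add namespace", while the interleaved IOPS lines show the Nvme1n1 randrw job continuing in the background. A minimal sketch of the same failure, assuming a running SPDK target that already serves NSID 1 on this subsystem as in the run above (the malloc1 bdev name is only illustrative):

    # Create a spare malloc bdev (64 MiB, 512-byte blocks), then try to attach it
    # at an NSID that is already taken; the second call is expected to fail with
    # "Requested NSID 1 already in use", matching the log entries above.
    ./scripts/rpc.py bdev_malloc_create -b malloc1 64 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1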
/home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65718) - No such process 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65718 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.526 delay0 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.526 13:29:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:04.785 [2024-11-20 13:29:16.586757] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:11.349 Initializing NVMe Controllers 00:10:11.349 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:11.349 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:11.349 Initialization complete. Launching workers. 
00:10:11.349 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 115 00:10:11.349 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 402, failed to submit 33 00:10:11.349 success 265, unsuccessful 137, failed 0 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.349 rmmod nvme_tcp 00:10:11.349 rmmod nvme_fabrics 00:10:11.349 rmmod nvme_keyring 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65566 ']' 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65566 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65566 ']' 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65566 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65566 00:10:11.349 killing process with pid 65566 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65566' 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65566 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65566 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.349 13:29:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:11.349 13:29:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:11.349 00:10:11.349 real 0m25.066s 00:10:11.349 user 0m40.575s 00:10:11.349 sys 0m7.061s 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.349 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.349 ************************************ 00:10:11.350 END TEST nvmf_zcopy 00:10:11.350 ************************************ 00:10:11.350 13:29:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:11.350 13:29:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.350 13:29:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.350 13:29:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.609 ************************************ 00:10:11.609 START TEST nvmf_nmic 00:10:11.609 ************************************ 00:10:11.609 13:29:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:11.609 * Looking for test storage... 00:10:11.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:11.609 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:11.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.610 --rc genhtml_branch_coverage=1 00:10:11.610 --rc genhtml_function_coverage=1 00:10:11.610 --rc genhtml_legend=1 00:10:11.610 --rc geninfo_all_blocks=1 00:10:11.610 --rc geninfo_unexecuted_blocks=1 00:10:11.610 00:10:11.610 ' 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:11.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.610 --rc genhtml_branch_coverage=1 00:10:11.610 --rc genhtml_function_coverage=1 00:10:11.610 --rc genhtml_legend=1 00:10:11.610 --rc geninfo_all_blocks=1 00:10:11.610 --rc geninfo_unexecuted_blocks=1 00:10:11.610 00:10:11.610 ' 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:11.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.610 --rc genhtml_branch_coverage=1 00:10:11.610 --rc genhtml_function_coverage=1 00:10:11.610 --rc genhtml_legend=1 00:10:11.610 --rc geninfo_all_blocks=1 00:10:11.610 --rc geninfo_unexecuted_blocks=1 00:10:11.610 00:10:11.610 ' 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:11.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.610 --rc genhtml_branch_coverage=1 00:10:11.610 --rc genhtml_function_coverage=1 00:10:11.610 --rc genhtml_legend=1 00:10:11.610 --rc geninfo_all_blocks=1 00:10:11.610 --rc geninfo_unexecuted_blocks=1 00:10:11.610 00:10:11.610 ' 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.610 13:29:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.610 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:11.610 13:29:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.610 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:11.611 Cannot 
find device "nvmf_init_br" 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:11.611 Cannot find device "nvmf_init_br2" 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:11.611 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:11.870 Cannot find device "nvmf_tgt_br" 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.870 Cannot find device "nvmf_tgt_br2" 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:11.870 Cannot find device "nvmf_init_br" 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:11.870 Cannot find device "nvmf_init_br2" 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:11.870 Cannot find device "nvmf_tgt_br" 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:11.870 Cannot find device "nvmf_tgt_br2" 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:11.870 Cannot find device "nvmf_br" 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:11.870 Cannot find device "nvmf_init_if" 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:11.870 Cannot find device "nvmf_init_if2" 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.870 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.870 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:11.870 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:11.871 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:11.871 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:11.871 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:11.871 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:11.871 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:11.871 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:11.871 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:12.129 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.129 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.129 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.129 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:12.130 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.130 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:10:12.130 00:10:12.130 --- 10.0.0.3 ping statistics --- 00:10:12.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.130 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:12.130 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:12.130 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:10:12.130 00:10:12.130 --- 10.0.0.4 ping statistics --- 00:10:12.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.130 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:12.130 00:10:12.130 --- 10.0.0.1 ping statistics --- 00:10:12.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.130 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:12.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:12.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:10:12.130 00:10:12.130 --- 10.0.0.2 ping statistics --- 00:10:12.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.130 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=66097 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 66097 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 66097 ']' 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.130 13:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.130 [2024-11-20 13:29:24.022962] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:10:12.130 [2024-11-20 13:29:24.023069] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.388 [2024-11-20 13:29:24.177560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.388 [2024-11-20 13:29:24.246563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.388 [2024-11-20 13:29:24.246876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.388 [2024-11-20 13:29:24.247119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.388 [2024-11-20 13:29:24.247388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.388 [2024-11-20 13:29:24.247500] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.388 [2024-11-20 13:29:24.248919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.388 [2024-11-20 13:29:24.249150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.388 [2024-11-20 13:29:24.249058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.388 [2024-11-20 13:29:24.249149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.388 [2024-11-20 13:29:24.306489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.646 [2024-11-20 13:29:24.423789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.646 Malloc0 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:12.646 13:29:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.646 [2024-11-20 13:29:24.488717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:12.646 test case1: single bdev can't be used in multiple subsystems 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.646 [2024-11-20 13:29:24.516533] bdev.c:8526:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:12.646 [2024-11-20 13:29:24.516576] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:12.646 [2024-11-20 13:29:24.516588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.646 request: 00:10:12.646 { 00:10:12.646 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:12.646 "namespace": { 00:10:12.646 "bdev_name": "Malloc0", 00:10:12.646 "no_auto_visible": false, 00:10:12.646 "hide_metadata": false 00:10:12.646 }, 00:10:12.646 "method": "nvmf_subsystem_add_ns", 00:10:12.646 "req_id": 1 00:10:12.646 } 00:10:12.646 Got JSON-RPC error response 00:10:12.646 response: 00:10:12.646 { 00:10:12.646 "code": -32602, 00:10:12.646 "message": "Invalid parameters" 00:10:12.646 } 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:12.646 Adding namespace failed - expected result. 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:12.646 test case2: host connect to nvmf target in multiple paths 00:10:12.646 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:12.647 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.647 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.647 [2024-11-20 13:29:24.528690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:12.647 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.647 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid=8ff08136-65da-4f4c-b769-a07096c587b5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:12.903 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid=8ff08136-65da-4f4c-b769-a07096c587b5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:12.903 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:12.903 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:12.903 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.903 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:12.903 13:29:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:15.438 13:29:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:15.438 13:29:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:15.438 13:29:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.438 13:29:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:15.438 13:29:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:10:15.438 13:29:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:15.438 13:29:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:15.438 [global] 00:10:15.438 thread=1 00:10:15.438 invalidate=1 00:10:15.438 rw=write 00:10:15.438 time_based=1 00:10:15.438 runtime=1 00:10:15.438 ioengine=libaio 00:10:15.438 direct=1 00:10:15.438 bs=4096 00:10:15.438 iodepth=1 00:10:15.438 norandommap=0 00:10:15.438 numjobs=1 00:10:15.438 00:10:15.438 verify_dump=1 00:10:15.438 verify_backlog=512 00:10:15.438 verify_state_save=0 00:10:15.438 do_verify=1 00:10:15.438 verify=crc32c-intel 00:10:15.438 [job0] 00:10:15.438 filename=/dev/nvme0n1 00:10:15.438 Could not set queue depth (nvme0n1) 00:10:15.438 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.438 fio-3.35 00:10:15.438 Starting 1 thread 00:10:16.409 00:10:16.409 job0: (groupid=0, jobs=1): err= 0: pid=66170: Wed Nov 20 13:29:28 2024 00:10:16.409 read: IOPS=2783, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:10:16.409 slat (nsec): min=11923, max=44455, avg=14450.43, stdev=2987.64 00:10:16.409 clat (usec): min=141, max=477, avg=188.56, stdev=32.40 00:10:16.409 lat (usec): min=156, max=492, avg=203.01, stdev=32.62 00:10:16.409 clat percentiles (usec): 00:10:16.409 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:10:16.409 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:10:16.409 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 225], 95.00th=[ 262], 00:10:16.409 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 400], 99.95th=[ 420], 00:10:16.409 | 99.99th=[ 478] 00:10:16.409 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:16.409 slat (usec): min=16, max=148, avg=21.67, stdev= 6.91 00:10:16.409 clat (usec): min=87, max=2939, avg=116.42, stdev=78.03 00:10:16.409 lat (usec): min=107, max=2967, avg=138.08, stdev=79.82 00:10:16.409 clat percentiles (usec): 00:10:16.409 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 101], 00:10:16.409 | 30.00th=[ 105], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 114], 00:10:16.409 | 70.00th=[ 117], 80.00th=[ 123], 90.00th=[ 133], 95.00th=[ 145], 00:10:16.409 | 99.00th=[ 182], 99.50th=[ 225], 99.90th=[ 1450], 99.95th=[ 2114], 00:10:16.409 | 99.99th=[ 2933] 00:10:16.409 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:16.409 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:16.409 lat (usec) : 100=8.74%, 250=88.02%, 500=3.14%, 750=0.02% 00:10:16.409 lat (msec) : 2=0.05%, 4=0.03% 00:10:16.409 cpu : usr=2.50%, sys=8.20%, ctx=5858, majf=0, minf=5 00:10:16.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.409 issued rwts: total=2786,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.409 00:10:16.409 Run status group 0 (all jobs): 00:10:16.409 READ: bw=10.9MiB/s (11.4MB/s), 10.9MiB/s-10.9MiB/s (11.4MB/s-11.4MB/s), io=10.9MiB (11.4MB), run=1001-1001msec 00:10:16.409 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:16.409 00:10:16.409 Disk stats (read/write): 
00:10:16.409 nvme0n1: ios=2609/2618, merge=0/0, ticks=514/315, in_queue=829, util=90.57% 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.409 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.409 rmmod nvme_tcp 00:10:16.409 rmmod nvme_fabrics 00:10:16.667 rmmod nvme_keyring 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 66097 ']' 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 66097 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 66097 ']' 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 66097 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66097 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.667 killing process with pid 66097 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66097' 00:10:16.667 13:29:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 66097 00:10:16.667 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 66097 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:16.926 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:17.184 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.184 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.184 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:17.184 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.184 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.184 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.184 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:17.184 ************************************ 00:10:17.184 END TEST nvmf_nmic 00:10:17.184 ************************************ 00:10:17.184 00:10:17.184 real 0m5.659s 00:10:17.184 user 0m16.375s 00:10:17.184 sys 0m2.375s 00:10:17.184 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.184 13:29:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.184 13:29:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:17.184 13:29:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.184 13:29:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.184 13:29:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.184 ************************************ 00:10:17.184 START TEST nvmf_fio_target 00:10:17.184 ************************************ 00:10:17.185 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:17.185 * Looking for test storage... 00:10:17.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:17.185 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.185 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.185 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.444 --rc genhtml_branch_coverage=1 00:10:17.444 --rc genhtml_function_coverage=1 00:10:17.444 --rc genhtml_legend=1 00:10:17.444 --rc geninfo_all_blocks=1 00:10:17.444 --rc geninfo_unexecuted_blocks=1 00:10:17.444 00:10:17.444 ' 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.444 --rc genhtml_branch_coverage=1 00:10:17.444 --rc genhtml_function_coverage=1 00:10:17.444 --rc genhtml_legend=1 00:10:17.444 --rc geninfo_all_blocks=1 00:10:17.444 --rc geninfo_unexecuted_blocks=1 00:10:17.444 00:10:17.444 ' 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.444 --rc genhtml_branch_coverage=1 00:10:17.444 --rc genhtml_function_coverage=1 00:10:17.444 --rc genhtml_legend=1 00:10:17.444 --rc geninfo_all_blocks=1 00:10:17.444 --rc geninfo_unexecuted_blocks=1 00:10:17.444 00:10:17.444 ' 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.444 --rc genhtml_branch_coverage=1 00:10:17.444 --rc genhtml_function_coverage=1 00:10:17.444 --rc genhtml_legend=1 00:10:17.444 --rc geninfo_all_blocks=1 00:10:17.444 --rc geninfo_unexecuted_blocks=1 00:10:17.444 00:10:17.444 ' 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:17.444 
13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.444 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.445 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.445 13:29:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:17.445 Cannot find device "nvmf_init_br" 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:17.445 Cannot find device "nvmf_init_br2" 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:17.445 Cannot find device "nvmf_tgt_br" 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.445 Cannot find device "nvmf_tgt_br2" 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:17.445 Cannot find device "nvmf_init_br" 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:17.445 Cannot find device "nvmf_init_br2" 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:17.445 Cannot find device "nvmf_tgt_br" 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:17.445 Cannot find device "nvmf_tgt_br2" 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:17.445 Cannot find device "nvmf_br" 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:17.445 Cannot find device "nvmf_init_if" 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:17.445 Cannot find device "nvmf_init_if2" 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:17.445 
13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:17.445 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:17.706 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:17.706 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:10:17.706 00:10:17.706 --- 10.0.0.3 ping statistics --- 00:10:17.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.706 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:17.706 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:17.706 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:10:17.706 00:10:17.706 --- 10.0.0.4 ping statistics --- 00:10:17.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.706 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:17.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:10:17.706 00:10:17.706 --- 10.0.0.1 ping statistics --- 00:10:17.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.706 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:17.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:17.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:10:17.706 00:10:17.706 --- 10.0.0.2 ping statistics --- 00:10:17.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.706 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66409 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66409 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66409 ']' 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.706 13:29:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.964 [2024-11-20 13:29:29.713896] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:10:17.964 [2024-11-20 13:29:29.714268] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.964 [2024-11-20 13:29:29.867524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.223 [2024-11-20 13:29:29.934199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.223 [2024-11-20 13:29:29.934269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.223 [2024-11-20 13:29:29.934290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.223 [2024-11-20 13:29:29.934301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.223 [2024-11-20 13:29:29.934310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.223 [2024-11-20 13:29:29.935540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.223 [2024-11-20 13:29:29.935650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.223 [2024-11-20 13:29:29.935731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.223 [2024-11-20 13:29:29.935731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.223 [2024-11-20 13:29:29.993680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:18.223 13:29:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.223 13:29:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:18.223 13:29:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.223 13:29:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.223 13:29:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.223 13:29:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.223 13:29:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:18.788 [2024-11-20 13:29:30.440576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.788 13:29:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.079 13:29:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:19.079 13:29:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.337 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:19.337 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.595 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:19.595 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.160 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:20.160 13:29:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:20.418 13:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.676 13:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:20.676 13:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.933 13:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:20.933 13:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.498 13:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:21.498 13:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:21.498 13:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:22.064 13:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:22.064 13:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.064 13:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:22.064 13:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:22.322 13:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:22.580 [2024-11-20 13:29:34.499587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:22.580 13:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:22.837 13:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:23.095 13:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid=8ff08136-65da-4f4c-b769-a07096c587b5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:23.353 13:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:23.353 13:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:23.353 13:29:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.353 13:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:23.353 13:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:23.353 13:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:25.349 13:29:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:25.349 13:29:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:25.349 13:29:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:25.349 13:29:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:25.349 13:29:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:25.349 13:29:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:25.349 13:29:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:25.349 [global] 00:10:25.349 thread=1 00:10:25.349 invalidate=1 00:10:25.349 rw=write 00:10:25.349 time_based=1 00:10:25.349 runtime=1 00:10:25.349 ioengine=libaio 00:10:25.349 direct=1 00:10:25.349 bs=4096 00:10:25.349 iodepth=1 00:10:25.349 norandommap=0 00:10:25.349 numjobs=1 00:10:25.349 00:10:25.349 verify_dump=1 00:10:25.349 verify_backlog=512 00:10:25.349 verify_state_save=0 00:10:25.349 do_verify=1 00:10:25.349 verify=crc32c-intel 00:10:25.349 [job0] 00:10:25.349 filename=/dev/nvme0n1 00:10:25.349 [job1] 00:10:25.349 filename=/dev/nvme0n2 00:10:25.349 [job2] 00:10:25.349 filename=/dev/nvme0n3 00:10:25.349 [job3] 00:10:25.349 filename=/dev/nvme0n4 00:10:25.349 Could not set queue depth (nvme0n1) 00:10:25.349 Could not set queue depth (nvme0n2) 00:10:25.349 Could not set queue depth (nvme0n3) 00:10:25.349 Could not set queue depth (nvme0n4) 00:10:25.607 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.607 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.607 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.607 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.607 fio-3.35 00:10:25.607 Starting 4 threads 00:10:26.981 00:10:26.981 job0: (groupid=0, jobs=1): err= 0: pid=66597: Wed Nov 20 13:29:38 2024 00:10:26.981 read: IOPS=1398, BW=5594KiB/s (5729kB/s)(5600KiB/1001msec) 00:10:26.981 slat (nsec): min=13920, max=71165, avg=26755.04, stdev=10307.07 00:10:26.981 clat (usec): min=192, max=1040, avg=373.74, stdev=101.78 00:10:26.981 lat (usec): min=208, max=1079, avg=400.49, stdev=108.74 00:10:26.981 clat percentiles (usec): 00:10:26.981 | 1.00th=[ 247], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:10:26.981 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 379], 00:10:26.981 | 70.00th=[ 437], 80.00th=[ 482], 90.00th=[ 529], 95.00th=[ 553], 00:10:26.981 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[ 676], 99.95th=[ 1037], 00:10:26.981 | 99.99th=[ 
1037] 00:10:26.981 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:26.981 slat (usec): min=20, max=160, avg=38.41, stdev=13.44 00:10:26.981 clat (usec): min=102, max=3048, avg=241.42, stdev=107.98 00:10:26.981 lat (usec): min=128, max=3091, avg=279.83, stdev=112.87 00:10:26.981 clat percentiles (usec): 00:10:26.981 | 1.00th=[ 115], 5.00th=[ 127], 10.00th=[ 137], 20.00th=[ 174], 00:10:26.981 | 30.00th=[ 206], 40.00th=[ 219], 50.00th=[ 231], 60.00th=[ 241], 00:10:26.981 | 70.00th=[ 258], 80.00th=[ 302], 90.00th=[ 367], 95.00th=[ 383], 00:10:26.981 | 99.00th=[ 416], 99.50th=[ 433], 99.90th=[ 1303], 99.95th=[ 3064], 00:10:26.981 | 99.99th=[ 3064] 00:10:26.981 bw ( KiB/s): min= 8192, max= 8192, per=26.69%, avg=8192.00, stdev= 0.00, samples=1 00:10:26.981 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:26.981 lat (usec) : 250=35.42%, 500=56.37%, 750=8.11% 00:10:26.981 lat (msec) : 2=0.07%, 4=0.03% 00:10:26.981 cpu : usr=1.90%, sys=7.90%, ctx=2936, majf=0, minf=19 00:10:26.981 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.981 issued rwts: total=1400,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.981 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.981 job1: (groupid=0, jobs=1): err= 0: pid=66598: Wed Nov 20 13:29:38 2024 00:10:26.981 read: IOPS=1741, BW=6965KiB/s (7132kB/s)(6972KiB/1001msec) 00:10:26.981 slat (nsec): min=8527, max=43325, avg=13733.02, stdev=3726.65 00:10:26.981 clat (usec): min=184, max=727, avg=279.75, stdev=36.91 00:10:26.981 lat (usec): min=201, max=743, avg=293.48, stdev=37.70 00:10:26.981 clat percentiles (usec): 00:10:26.981 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 258], 00:10:26.981 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:10:26.982 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 330], 00:10:26.982 | 99.00th=[ 388], 99.50th=[ 502], 99.90th=[ 701], 99.95th=[ 725], 00:10:26.982 | 99.99th=[ 725] 00:10:26.982 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:26.982 slat (usec): min=11, max=128, avg=18.65, stdev= 6.00 00:10:26.982 clat (usec): min=130, max=919, avg=216.91, stdev=30.44 00:10:26.982 lat (usec): min=170, max=945, avg=235.56, stdev=31.71 00:10:26.982 clat percentiles (usec): 00:10:26.982 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:10:26.982 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:10:26.982 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 258], 00:10:26.982 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 523], 99.95th=[ 619], 00:10:26.982 | 99.99th=[ 922] 00:10:26.982 bw ( KiB/s): min= 8192, max= 8192, per=26.69%, avg=8192.00, stdev= 0.00, samples=1 00:10:26.982 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:26.982 lat (usec) : 250=55.03%, 500=44.63%, 750=0.32%, 1000=0.03% 00:10:26.982 cpu : usr=1.40%, sys=5.40%, ctx=3792, majf=0, minf=3 00:10:26.982 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.982 issued rwts: total=1743,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.982 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:10:26.982 job2: (groupid=0, jobs=1): err= 0: pid=66599: Wed Nov 20 13:29:38 2024 00:10:26.982 read: IOPS=1757, BW=7029KiB/s (7198kB/s)(7036KiB/1001msec) 00:10:26.982 slat (nsec): min=12160, max=75790, avg=18016.31, stdev=6060.59 00:10:26.982 clat (usec): min=178, max=741, avg=301.44, stdev=69.55 00:10:26.982 lat (usec): min=192, max=753, avg=319.45, stdev=71.77 00:10:26.982 clat percentiles (usec): 00:10:26.982 | 1.00th=[ 198], 5.00th=[ 229], 10.00th=[ 243], 20.00th=[ 260], 00:10:26.982 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:10:26.982 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 388], 95.00th=[ 486], 00:10:26.982 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 685], 99.95th=[ 742], 00:10:26.982 | 99.99th=[ 742] 00:10:26.982 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:26.982 slat (usec): min=17, max=131, avg=24.55, stdev= 6.22 00:10:26.982 clat (usec): min=106, max=436, avg=185.68, stdev=43.45 00:10:26.982 lat (usec): min=128, max=555, avg=210.23, stdev=45.01 00:10:26.982 clat percentiles (usec): 00:10:26.982 | 1.00th=[ 117], 5.00th=[ 127], 10.00th=[ 133], 20.00th=[ 143], 00:10:26.982 | 30.00th=[ 151], 40.00th=[ 165], 50.00th=[ 188], 60.00th=[ 202], 00:10:26.982 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 243], 95.00th=[ 255], 00:10:26.982 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 359], 99.95th=[ 424], 00:10:26.982 | 99.99th=[ 437] 00:10:26.982 bw ( KiB/s): min= 8192, max= 8192, per=26.69%, avg=8192.00, stdev= 0.00, samples=1 00:10:26.982 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:26.982 lat (usec) : 250=56.63%, 500=41.40%, 750=1.97% 00:10:26.982 cpu : usr=2.00%, sys=6.30%, ctx=3818, majf=0, minf=7 00:10:26.982 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.982 issued rwts: total=1759,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.982 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.982 job3: (groupid=0, jobs=1): err= 0: pid=66600: Wed Nov 20 13:29:38 2024 00:10:26.982 read: IOPS=1741, BW=6965KiB/s (7132kB/s)(6972KiB/1001msec) 00:10:26.982 slat (nsec): min=8525, max=68191, avg=15415.07, stdev=6290.99 00:10:26.982 clat (usec): min=199, max=703, avg=278.05, stdev=35.47 00:10:26.982 lat (usec): min=223, max=728, avg=293.46, stdev=37.83 00:10:26.982 clat percentiles (usec): 00:10:26.982 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 258], 00:10:26.982 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:10:26.982 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 322], 00:10:26.982 | 99.00th=[ 392], 99.50th=[ 529], 99.90th=[ 685], 99.95th=[ 701], 00:10:26.982 | 99.99th=[ 701] 00:10:26.982 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:26.982 slat (usec): min=11, max=119, avg=21.75, stdev= 6.22 00:10:26.982 clat (usec): min=134, max=915, avg=213.47, stdev=30.02 00:10:26.982 lat (usec): min=182, max=941, avg=235.22, stdev=31.28 00:10:26.982 clat percentiles (usec): 00:10:26.982 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:10:26.982 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:10:26.982 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 255], 00:10:26.982 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 510], 99.95th=[ 627], 
00:10:26.982 | 99.99th=[ 914] 00:10:26.982 bw ( KiB/s): min= 8192, max= 8192, per=26.69%, avg=8192.00, stdev= 0.00, samples=1 00:10:26.982 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:26.982 lat (usec) : 250=55.47%, 500=44.16%, 750=0.34%, 1000=0.03% 00:10:26.982 cpu : usr=0.80%, sys=7.00%, ctx=3792, majf=0, minf=9 00:10:26.982 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.982 issued rwts: total=1743,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.982 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.982 00:10:26.982 Run status group 0 (all jobs): 00:10:26.982 READ: bw=25.9MiB/s (27.2MB/s), 5594KiB/s-7029KiB/s (5729kB/s-7198kB/s), io=26.0MiB (27.2MB), run=1001-1001msec 00:10:26.982 WRITE: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:10:26.982 00:10:26.982 Disk stats (read/write): 00:10:26.982 nvme0n1: ios=1154/1536, merge=0/0, ticks=435/400, in_queue=835, util=88.38% 00:10:26.982 nvme0n2: ios=1579/1685, merge=0/0, ticks=447/346, in_queue=793, util=88.75% 00:10:26.982 nvme0n3: ios=1536/1665, merge=0/0, ticks=472/340, in_queue=812, util=89.25% 00:10:26.982 nvme0n4: ios=1536/1683, merge=0/0, ticks=431/359, in_queue=790, util=89.70% 00:10:26.982 13:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:26.982 [global] 00:10:26.982 thread=1 00:10:26.982 invalidate=1 00:10:26.982 rw=randwrite 00:10:26.982 time_based=1 00:10:26.982 runtime=1 00:10:26.982 ioengine=libaio 00:10:26.982 direct=1 00:10:26.982 bs=4096 00:10:26.982 iodepth=1 00:10:26.982 norandommap=0 00:10:26.982 numjobs=1 00:10:26.982 00:10:26.982 verify_dump=1 00:10:26.982 verify_backlog=512 00:10:26.982 verify_state_save=0 00:10:26.982 do_verify=1 00:10:26.982 verify=crc32c-intel 00:10:26.982 [job0] 00:10:26.982 filename=/dev/nvme0n1 00:10:26.982 [job1] 00:10:26.982 filename=/dev/nvme0n2 00:10:26.982 [job2] 00:10:26.982 filename=/dev/nvme0n3 00:10:26.982 [job3] 00:10:26.982 filename=/dev/nvme0n4 00:10:26.982 Could not set queue depth (nvme0n1) 00:10:26.982 Could not set queue depth (nvme0n2) 00:10:26.982 Could not set queue depth (nvme0n3) 00:10:26.982 Could not set queue depth (nvme0n4) 00:10:26.982 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.982 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.982 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.982 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.982 fio-3.35 00:10:26.982 Starting 4 threads 00:10:28.360 00:10:28.360 job0: (groupid=0, jobs=1): err= 0: pid=66653: Wed Nov 20 13:29:39 2024 00:10:28.360 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:28.361 slat (nsec): min=10862, max=51271, avg=13142.38, stdev=2829.95 00:10:28.361 clat (usec): min=148, max=2369, avg=194.86, stdev=63.88 00:10:28.361 lat (usec): min=166, max=2384, avg=208.01, stdev=64.01 00:10:28.361 clat percentiles (usec): 00:10:28.361 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 
00:10:28.361 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:10:28.361 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 223], 00:10:28.361 | 99.00th=[ 247], 99.50th=[ 262], 99.90th=[ 750], 99.95th=[ 2278], 00:10:28.361 | 99.99th=[ 2376] 00:10:28.361 write: IOPS=2921, BW=11.4MiB/s (12.0MB/s)(11.4MiB/1001msec); 0 zone resets 00:10:28.361 slat (nsec): min=14294, max=90964, avg=20103.35, stdev=3939.79 00:10:28.361 clat (usec): min=102, max=798, avg=136.37, stdev=19.51 00:10:28.361 lat (usec): min=120, max=819, avg=156.47, stdev=20.34 00:10:28.361 clat percentiles (usec): 00:10:28.361 | 1.00th=[ 111], 5.00th=[ 116], 10.00th=[ 120], 20.00th=[ 124], 00:10:28.361 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 139], 00:10:28.361 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 165], 00:10:28.361 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 217], 99.95th=[ 260], 00:10:28.361 | 99.99th=[ 799] 00:10:28.361 bw ( KiB/s): min=12288, max=12288, per=32.76%, avg=12288.00, stdev= 0.00, samples=1 00:10:28.361 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:28.361 lat (usec) : 250=99.58%, 500=0.31%, 750=0.04%, 1000=0.04% 00:10:28.361 lat (msec) : 4=0.04% 00:10:28.361 cpu : usr=1.90%, sys=7.60%, ctx=5485, majf=0, minf=5 00:10:28.361 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.361 issued rwts: total=2560,2924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.361 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.361 job1: (groupid=0, jobs=1): err= 0: pid=66654: Wed Nov 20 13:29:39 2024 00:10:28.361 read: IOPS=1922, BW=7688KiB/s (7873kB/s)(7696KiB/1001msec) 00:10:28.361 slat (nsec): min=7945, max=48956, avg=13062.76, stdev=3511.45 00:10:28.361 clat (usec): min=149, max=935, avg=300.51, stdev=85.34 00:10:28.361 lat (usec): min=162, max=959, avg=313.58, stdev=85.61 00:10:28.361 clat percentiles (usec): 00:10:28.361 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 204], 00:10:28.361 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 310], 00:10:28.361 | 70.00th=[ 363], 80.00th=[ 383], 90.00th=[ 404], 95.00th=[ 420], 00:10:28.361 | 99.00th=[ 529], 99.50th=[ 578], 99.90th=[ 676], 99.95th=[ 938], 00:10:28.361 | 99.99th=[ 938] 00:10:28.361 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:28.361 slat (usec): min=11, max=109, avg=20.21, stdev= 8.24 00:10:28.361 clat (usec): min=105, max=319, avg=169.95, stdev=37.64 00:10:28.361 lat (usec): min=124, max=355, avg=190.16, stdev=35.75 00:10:28.361 clat percentiles (usec): 00:10:28.361 | 1.00th=[ 115], 5.00th=[ 124], 10.00th=[ 130], 20.00th=[ 139], 00:10:28.361 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 159], 60.00th=[ 169], 00:10:28.361 | 70.00th=[ 188], 80.00th=[ 204], 90.00th=[ 219], 95.00th=[ 243], 00:10:28.361 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 310], 99.95th=[ 310], 00:10:28.361 | 99.99th=[ 318] 00:10:28.361 bw ( KiB/s): min= 9344, max= 9344, per=24.91%, avg=9344.00, stdev= 0.00, samples=1 00:10:28.361 iops : min= 2336, max= 2336, avg=2336.00, stdev= 0.00, samples=1 00:10:28.361 lat (usec) : 250=62.16%, 500=37.16%, 750=0.65%, 1000=0.03% 00:10:28.361 cpu : usr=1.20%, sys=6.00%, ctx=3973, majf=0, minf=12 00:10:28.361 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.361 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.361 issued rwts: total=1924,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.361 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.361 job2: (groupid=0, jobs=1): err= 0: pid=66655: Wed Nov 20 13:29:39 2024 00:10:28.361 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:28.361 slat (nsec): min=12735, max=86762, avg=17134.15, stdev=4737.18 00:10:28.361 clat (usec): min=165, max=712, avg=220.71, stdev=41.72 00:10:28.361 lat (usec): min=179, max=733, avg=237.84, stdev=43.24 00:10:28.361 clat percentiles (usec): 00:10:28.361 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:10:28.361 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:10:28.361 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 258], 95.00th=[ 314], 00:10:28.361 | 99.00th=[ 363], 99.50th=[ 424], 99.90th=[ 537], 99.95th=[ 586], 00:10:28.361 | 99.99th=[ 709] 00:10:28.361 write: IOPS=2516, BW=9.83MiB/s (10.3MB/s)(9.84MiB/1001msec); 0 zone resets 00:10:28.361 slat (usec): min=14, max=113, avg=24.09, stdev= 5.89 00:10:28.361 clat (usec): min=113, max=857, avg=175.88, stdev=53.61 00:10:28.361 lat (usec): min=135, max=895, avg=199.96, stdev=56.71 00:10:28.361 clat percentiles (usec): 00:10:28.361 | 1.00th=[ 122], 5.00th=[ 129], 10.00th=[ 135], 20.00th=[ 141], 00:10:28.361 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 163], 00:10:28.361 | 70.00th=[ 174], 80.00th=[ 223], 90.00th=[ 255], 95.00th=[ 277], 00:10:28.361 | 99.00th=[ 343], 99.50th=[ 363], 99.90th=[ 478], 99.95th=[ 498], 00:10:28.361 | 99.99th=[ 857] 00:10:28.361 bw ( KiB/s): min= 8192, max= 8192, per=21.84%, avg=8192.00, stdev= 0.00, samples=1 00:10:28.361 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:28.361 lat (usec) : 250=88.15%, 500=11.71%, 750=0.11%, 1000=0.02% 00:10:28.361 cpu : usr=1.30%, sys=8.20%, ctx=4567, majf=0, minf=11 00:10:28.361 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.361 issued rwts: total=2048,2519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.361 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.361 job3: (groupid=0, jobs=1): err= 0: pid=66656: Wed Nov 20 13:29:39 2024 00:10:28.361 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:28.361 slat (nsec): min=8410, max=55599, avg=14687.46, stdev=4971.33 00:10:28.361 clat (usec): min=197, max=7836, avg=342.93, stdev=252.94 00:10:28.361 lat (usec): min=209, max=7860, avg=357.62, stdev=253.61 00:10:28.361 clat percentiles (usec): 00:10:28.361 | 1.00th=[ 253], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 281], 00:10:28.361 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 347], 00:10:28.361 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 396], 95.00th=[ 408], 00:10:28.361 | 99.00th=[ 474], 99.50th=[ 848], 99.90th=[ 3458], 99.95th=[ 7832], 00:10:28.361 | 99.99th=[ 7832] 00:10:28.361 write: IOPS=1894, BW=7576KiB/s (7758kB/s)(7584KiB/1001msec); 0 zone resets 00:10:28.361 slat (usec): min=11, max=441, avg=22.63, stdev=15.03 00:10:28.361 clat (usec): min=117, max=534, avg=211.53, stdev=49.22 00:10:28.361 lat (usec): min=144, max=656, avg=234.16, stdev=56.38 00:10:28.361 clat percentiles (usec): 00:10:28.361 | 1.00th=[ 143], 5.00th=[ 153], 
10.00th=[ 159], 20.00th=[ 169], 00:10:28.361 | 30.00th=[ 182], 40.00th=[ 192], 50.00th=[ 204], 60.00th=[ 212], 00:10:28.361 | 70.00th=[ 231], 80.00th=[ 249], 90.00th=[ 273], 95.00th=[ 297], 00:10:28.361 | 99.00th=[ 367], 99.50th=[ 412], 99.90th=[ 490], 99.95th=[ 537], 00:10:28.361 | 99.99th=[ 537] 00:10:28.361 bw ( KiB/s): min= 8192, max= 8192, per=21.84%, avg=8192.00, stdev= 0.00, samples=1 00:10:28.361 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:28.361 lat (usec) : 250=44.87%, 500=54.69%, 750=0.17%, 1000=0.09% 00:10:28.361 lat (msec) : 4=0.15%, 10=0.03% 00:10:28.361 cpu : usr=1.20%, sys=5.90%, ctx=3441, majf=0, minf=21 00:10:28.361 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.361 issued rwts: total=1536,1896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.361 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.361 00:10:28.361 Run status group 0 (all jobs): 00:10:28.361 READ: bw=31.5MiB/s (33.0MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.5MiB (33.0MB), run=1001-1001msec 00:10:28.361 WRITE: bw=36.6MiB/s (38.4MB/s), 7576KiB/s-11.4MiB/s (7758kB/s-12.0MB/s), io=36.7MiB (38.4MB), run=1001-1001msec 00:10:28.361 00:10:28.361 Disk stats (read/write): 00:10:28.361 nvme0n1: ios=2243/2560, merge=0/0, ticks=472/362, in_queue=834, util=89.18% 00:10:28.361 nvme0n2: ios=1591/2048, merge=0/0, ticks=472/343, in_queue=815, util=89.19% 00:10:28.361 nvme0n3: ios=1843/2048, merge=0/0, ticks=443/391, in_queue=834, util=89.53% 00:10:28.361 nvme0n4: ios=1356/1536, merge=0/0, ticks=460/323, in_queue=783, util=88.74% 00:10:28.361 13:29:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:28.361 [global] 00:10:28.361 thread=1 00:10:28.361 invalidate=1 00:10:28.361 rw=write 00:10:28.361 time_based=1 00:10:28.361 runtime=1 00:10:28.361 ioengine=libaio 00:10:28.361 direct=1 00:10:28.361 bs=4096 00:10:28.361 iodepth=128 00:10:28.361 norandommap=0 00:10:28.361 numjobs=1 00:10:28.361 00:10:28.361 verify_dump=1 00:10:28.361 verify_backlog=512 00:10:28.361 verify_state_save=0 00:10:28.361 do_verify=1 00:10:28.361 verify=crc32c-intel 00:10:28.361 [job0] 00:10:28.361 filename=/dev/nvme0n1 00:10:28.362 [job1] 00:10:28.362 filename=/dev/nvme0n2 00:10:28.362 [job2] 00:10:28.362 filename=/dev/nvme0n3 00:10:28.362 [job3] 00:10:28.362 filename=/dev/nvme0n4 00:10:28.362 Could not set queue depth (nvme0n1) 00:10:28.362 Could not set queue depth (nvme0n2) 00:10:28.362 Could not set queue depth (nvme0n3) 00:10:28.362 Could not set queue depth (nvme0n4) 00:10:28.362 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.362 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.362 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.362 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.362 fio-3.35 00:10:28.362 Starting 4 threads 00:10:29.737 00:10:29.737 job0: (groupid=0, jobs=1): err= 0: pid=66715: Wed Nov 20 13:29:41 2024 00:10:29.737 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:10:29.737 slat (usec): min=3, 
max=17793, avg=265.77, stdev=1250.62 00:10:29.737 clat (usec): min=18739, max=60004, avg=33826.60, stdev=9693.29 00:10:29.737 lat (usec): min=18760, max=60023, avg=34092.36, stdev=9720.85 00:10:29.737 clat percentiles (usec): 00:10:29.737 | 1.00th=[20055], 5.00th=[21103], 10.00th=[21890], 20.00th=[23987], 00:10:29.737 | 30.00th=[26346], 40.00th=[30016], 50.00th=[33424], 60.00th=[36963], 00:10:29.737 | 70.00th=[38011], 80.00th=[40109], 90.00th=[49021], 95.00th=[52691], 00:10:29.737 | 99.00th=[56361], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:10:29.737 | 99.99th=[60031] 00:10:29.737 write: IOPS=2205, BW=8821KiB/s (9032kB/s)(8856KiB/1004msec); 0 zone resets 00:10:29.737 slat (usec): min=7, max=9552, avg=198.54, stdev=856.28 00:10:29.737 clat (usec): min=1041, max=51787, avg=25666.45, stdev=6201.98 00:10:29.737 lat (usec): min=4955, max=51810, avg=25864.99, stdev=6181.93 00:10:29.737 clat percentiles (usec): 00:10:29.737 | 1.00th=[ 5473], 5.00th=[17695], 10.00th=[19268], 20.00th=[21627], 00:10:29.737 | 30.00th=[22938], 40.00th=[23987], 50.00th=[24773], 60.00th=[26084], 00:10:29.737 | 70.00th=[28705], 80.00th=[30016], 90.00th=[33162], 95.00th=[34341], 00:10:29.737 | 99.00th=[44303], 99.50th=[47973], 99.90th=[51643], 99.95th=[51643], 00:10:29.737 | 99.99th=[51643] 00:10:29.737 bw ( KiB/s): min= 5560, max=11128, per=17.57%, avg=8344.00, stdev=3937.17, samples=2 00:10:29.737 iops : min= 1390, max= 2782, avg=2086.00, stdev=984.29, samples=2 00:10:29.737 lat (msec) : 2=0.02%, 10=0.84%, 20=6.34%, 50=88.85%, 100=3.94% 00:10:29.737 cpu : usr=2.69%, sys=6.18%, ctx=546, majf=0, minf=6 00:10:29.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:29.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.738 issued rwts: total=2048,2214,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.738 job1: (groupid=0, jobs=1): err= 0: pid=66716: Wed Nov 20 13:29:41 2024 00:10:29.738 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:10:29.738 slat (usec): min=6, max=13174, avg=256.74, stdev=1163.02 00:10:29.738 clat (usec): min=4694, max=53479, avg=31888.52, stdev=11073.20 00:10:29.738 lat (usec): min=7780, max=53495, avg=32145.26, stdev=11105.84 00:10:29.738 clat percentiles (usec): 00:10:29.738 | 1.00th=[13173], 5.00th=[17695], 10.00th=[18744], 20.00th=[19530], 00:10:29.738 | 30.00th=[22414], 40.00th=[26346], 50.00th=[33162], 60.00th=[37487], 00:10:29.738 | 70.00th=[38011], 80.00th=[42730], 90.00th=[46924], 95.00th=[50594], 00:10:29.738 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53216], 99.95th=[53740], 00:10:29.738 | 99.99th=[53740] 00:10:29.738 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:10:29.738 slat (usec): min=12, max=7303, avg=222.22, stdev=930.80 00:10:29.738 clat (usec): min=17644, max=55566, avg=29962.25, stdev=8752.92 00:10:29.738 lat (usec): min=17888, max=55598, avg=30184.47, stdev=8787.80 00:10:29.738 clat percentiles (usec): 00:10:29.738 | 1.00th=[18220], 5.00th=[19006], 10.00th=[19530], 20.00th=[20579], 00:10:29.738 | 30.00th=[24511], 40.00th=[25560], 50.00th=[29754], 60.00th=[30540], 00:10:29.738 | 70.00th=[32900], 80.00th=[36439], 90.00th=[42730], 95.00th=[46924], 00:10:29.738 | 99.00th=[52691], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:10:29.738 | 99.99th=[55313] 00:10:29.738 bw ( KiB/s): min= 5928, max=10456, 
per=17.25%, avg=8192.00, stdev=3201.78, samples=2 00:10:29.738 iops : min= 1482, max= 2614, avg=2048.00, stdev=800.44, samples=2 00:10:29.738 lat (msec) : 10=0.12%, 20=19.19%, 50=76.20%, 100=4.49% 00:10:29.738 cpu : usr=2.29%, sys=6.37%, ctx=280, majf=0, minf=1 00:10:29.738 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:29.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.738 issued rwts: total=2048,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.738 job2: (groupid=0, jobs=1): err= 0: pid=66717: Wed Nov 20 13:29:41 2024 00:10:29.738 read: IOPS=2736, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1006msec) 00:10:29.738 slat (usec): min=3, max=12349, avg=189.71, stdev=1036.73 00:10:29.738 clat (usec): min=563, max=49958, avg=23625.65, stdev=7813.17 00:10:29.738 lat (usec): min=5589, max=49972, avg=23815.36, stdev=7799.92 00:10:29.738 clat percentiles (usec): 00:10:29.738 | 1.00th=[ 6063], 5.00th=[14484], 10.00th=[16057], 20.00th=[16712], 00:10:29.738 | 30.00th=[18482], 40.00th=[22938], 50.00th=[24511], 60.00th=[25297], 00:10:29.738 | 70.00th=[25560], 80.00th=[26084], 90.00th=[31589], 95.00th=[44303], 00:10:29.738 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:10:29.738 | 99.99th=[50070] 00:10:29.738 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:10:29.738 slat (usec): min=11, max=7002, avg=148.50, stdev=740.88 00:10:29.738 clat (usec): min=10685, max=30897, avg=19961.12, stdev=4983.30 00:10:29.738 lat (usec): min=13140, max=30942, avg=20109.63, stdev=4961.15 00:10:29.738 clat percentiles (usec): 00:10:29.738 | 1.00th=[13173], 5.00th=[13566], 10.00th=[13829], 20.00th=[15664], 00:10:29.738 | 30.00th=[16712], 40.00th=[17171], 50.00th=[19006], 60.00th=[20579], 00:10:29.738 | 70.00th=[22414], 80.00th=[25297], 90.00th=[27657], 95.00th=[30016], 00:10:29.738 | 99.00th=[30540], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:10:29.738 | 99.99th=[30802] 00:10:29.738 bw ( KiB/s): min=12288, max=12288, per=25.88%, avg=12288.00, stdev= 0.00, samples=2 00:10:29.738 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:29.738 lat (usec) : 750=0.02% 00:10:29.738 lat (msec) : 10=0.55%, 20=43.50%, 50=55.93% 00:10:29.738 cpu : usr=3.58%, sys=8.36%, ctx=183, majf=0, minf=5 00:10:29.738 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:29.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.738 issued rwts: total=2753,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.738 job3: (groupid=0, jobs=1): err= 0: pid=66718: Wed Nov 20 13:29:41 2024 00:10:29.738 read: IOPS=4512, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1002msec) 00:10:29.738 slat (usec): min=5, max=5730, avg=110.14, stdev=428.56 00:10:29.738 clat (usec): min=902, max=33654, avg=14174.32, stdev=5888.66 00:10:29.738 lat (usec): min=914, max=33681, avg=14284.46, stdev=5931.70 00:10:29.738 clat percentiles (usec): 00:10:29.738 | 1.00th=[ 4621], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[11076], 00:10:29.738 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:10:29.738 | 70.00th=[12649], 80.00th=[20317], 90.00th=[24511], 95.00th=[26608], 
00:10:29.738 | 99.00th=[30802], 99.50th=[32375], 99.90th=[33817], 99.95th=[33817], 00:10:29.738 | 99.99th=[33817] 00:10:29.738 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:29.738 slat (usec): min=12, max=6288, avg=100.75, stdev=401.33 00:10:29.738 clat (usec): min=8205, max=27143, avg=13466.25, stdev=3723.33 00:10:29.738 lat (usec): min=8224, max=27165, avg=13567.00, stdev=3760.05 00:10:29.738 clat percentiles (usec): 00:10:29.738 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[10421], 20.00th=[10814], 00:10:29.738 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11731], 60.00th=[12518], 00:10:29.738 | 70.00th=[14222], 80.00th=[16712], 90.00th=[19006], 95.00th=[22152], 00:10:29.738 | 99.00th=[24773], 99.50th=[25035], 99.90th=[26084], 99.95th=[26346], 00:10:29.738 | 99.99th=[27132] 00:10:29.738 bw ( KiB/s): min=15288, max=21576, per=38.82%, avg=18432.00, stdev=4446.29, samples=2 00:10:29.738 iops : min= 3822, max= 5394, avg=4608.00, stdev=1111.57, samples=2 00:10:29.738 lat (usec) : 1000=0.07% 00:10:29.738 lat (msec) : 2=0.01%, 4=0.22%, 10=5.45%, 20=80.22%, 50=14.03% 00:10:29.738 cpu : usr=3.70%, sys=14.49%, ctx=710, majf=0, minf=1 00:10:29.738 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:29.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.738 issued rwts: total=4522,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.738 00:10:29.738 Run status group 0 (all jobs): 00:10:29.738 READ: bw=44.2MiB/s (46.3MB/s), 8151KiB/s-17.6MiB/s (8347kB/s-18.5MB/s), io=44.4MiB (46.6MB), run=1002-1006msec 00:10:29.738 WRITE: bw=46.4MiB/s (48.6MB/s), 8151KiB/s-18.0MiB/s (8347kB/s-18.8MB/s), io=46.6MiB (48.9MB), run=1002-1006msec 00:10:29.738 00:10:29.738 Disk stats (read/write): 00:10:29.738 nvme0n1: ios=1832/2048, merge=0/0, ticks=16056/13744, in_queue=29800, util=88.16% 00:10:29.738 nvme0n2: ios=1709/2048, merge=0/0, ticks=14042/15422, in_queue=29464, util=89.38% 00:10:29.738 nvme0n3: ios=2406/2560, merge=0/0, ticks=13931/11452, in_queue=25383, util=89.51% 00:10:29.738 nvme0n4: ios=3584/4079, merge=0/0, ticks=17237/16183, in_queue=33420, util=89.76% 00:10:29.738 13:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:29.738 [global] 00:10:29.738 thread=1 00:10:29.738 invalidate=1 00:10:29.738 rw=randwrite 00:10:29.738 time_based=1 00:10:29.738 runtime=1 00:10:29.738 ioengine=libaio 00:10:29.738 direct=1 00:10:29.738 bs=4096 00:10:29.738 iodepth=128 00:10:29.738 norandommap=0 00:10:29.738 numjobs=1 00:10:29.738 00:10:29.738 verify_dump=1 00:10:29.738 verify_backlog=512 00:10:29.738 verify_state_save=0 00:10:29.738 do_verify=1 00:10:29.738 verify=crc32c-intel 00:10:29.738 [job0] 00:10:29.738 filename=/dev/nvme0n1 00:10:29.738 [job1] 00:10:29.738 filename=/dev/nvme0n2 00:10:29.738 [job2] 00:10:29.738 filename=/dev/nvme0n3 00:10:29.738 [job3] 00:10:29.738 filename=/dev/nvme0n4 00:10:29.738 Could not set queue depth (nvme0n1) 00:10:29.738 Could not set queue depth (nvme0n2) 00:10:29.738 Could not set queue depth (nvme0n3) 00:10:29.738 Could not set queue depth (nvme0n4) 00:10:29.738 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.738 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.738 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.738 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.738 fio-3.35 00:10:29.738 Starting 4 threads 00:10:31.123 00:10:31.123 job0: (groupid=0, jobs=1): err= 0: pid=66779: Wed Nov 20 13:29:42 2024 00:10:31.123 read: IOPS=2330, BW=9320KiB/s (9544kB/s)(9348KiB/1003msec) 00:10:31.123 slat (usec): min=6, max=18884, avg=235.62, stdev=1467.02 00:10:31.123 clat (usec): min=1518, max=76217, avg=30450.37, stdev=18215.46 00:10:31.123 lat (usec): min=6050, max=76235, avg=30686.00, stdev=18291.82 00:10:31.123 clat percentiles (usec): 00:10:31.123 | 1.00th=[ 6456], 5.00th=[13698], 10.00th=[14484], 20.00th=[15795], 00:10:31.123 | 30.00th=[17433], 40.00th=[19006], 50.00th=[20317], 60.00th=[22414], 00:10:31.123 | 70.00th=[40633], 80.00th=[51643], 90.00th=[55313], 95.00th=[69731], 00:10:31.123 | 99.00th=[76022], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:10:31.123 | 99.99th=[76022] 00:10:31.123 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:10:31.123 slat (usec): min=10, max=19303, avg=167.96, stdev=996.82 00:10:31.123 clat (usec): min=7615, max=49750, avg=21376.34, stdev=10977.94 00:10:31.123 lat (usec): min=9852, max=52386, avg=21544.30, stdev=11023.93 00:10:31.123 clat percentiles (usec): 00:10:31.123 | 1.00th=[ 9896], 5.00th=[10814], 10.00th=[11207], 20.00th=[12518], 00:10:31.123 | 30.00th=[13173], 40.00th=[13435], 50.00th=[16188], 60.00th=[20579], 00:10:31.123 | 70.00th=[25822], 80.00th=[31327], 90.00th=[36439], 95.00th=[46400], 00:10:31.123 | 99.00th=[49546], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:10:31.123 | 99.99th=[49546] 00:10:31.123 bw ( KiB/s): min= 7432, max=13048, per=26.26%, avg=10240.00, stdev=3971.11, samples=2 00:10:31.123 iops : min= 1858, max= 3262, avg=2560.00, stdev=992.78, samples=2 00:10:31.123 lat (msec) : 2=0.02%, 10=1.37%, 20=52.99%, 50=33.49%, 100=12.13% 00:10:31.123 cpu : usr=2.79%, sys=6.39%, ctx=154, majf=0, minf=5 00:10:31.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:10:31.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.123 issued rwts: total=2337,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.123 job1: (groupid=0, jobs=1): err= 0: pid=66780: Wed Nov 20 13:29:42 2024 00:10:31.123 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:10:31.123 slat (usec): min=5, max=7341, avg=143.45, stdev=571.52 00:10:31.123 clat (usec): min=8729, max=66288, avg=18773.53, stdev=12496.54 00:10:31.123 lat (usec): min=8747, max=67351, avg=18916.97, stdev=12600.58 00:10:31.123 clat percentiles (usec): 00:10:31.123 | 1.00th=[ 9765], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[11600], 00:10:31.123 | 30.00th=[12256], 40.00th=[13173], 50.00th=[14484], 60.00th=[15139], 00:10:31.123 | 70.00th=[17171], 80.00th=[18482], 90.00th=[42730], 95.00th=[50594], 00:10:31.123 | 99.00th=[61080], 99.50th=[63701], 99.90th=[64226], 99.95th=[66323], 00:10:31.123 | 99.99th=[66323] 00:10:31.123 write: IOPS=3646, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1005msec); 0 zone resets 00:10:31.123 slat (usec): min=8, max=11617, avg=124.73, stdev=617.10 00:10:31.123 clat 
(usec): min=3238, max=58822, avg=16303.24, stdev=8980.69 00:10:31.123 lat (usec): min=6187, max=58844, avg=16427.98, stdev=9042.57 00:10:31.123 clat percentiles (usec): 00:10:31.123 | 1.00th=[ 9241], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[10945], 00:10:31.123 | 30.00th=[11338], 40.00th=[12125], 50.00th=[14222], 60.00th=[15401], 00:10:31.123 | 70.00th=[16057], 80.00th=[17695], 90.00th=[24773], 95.00th=[41157], 00:10:31.123 | 99.00th=[51643], 99.50th=[56361], 99.90th=[58983], 99.95th=[58983], 00:10:31.123 | 99.99th=[58983] 00:10:31.123 bw ( KiB/s): min=10640, max=18068, per=36.80%, avg=14354.00, stdev=5252.39, samples=2 00:10:31.123 iops : min= 2660, max= 4517, avg=3588.50, stdev=1313.10, samples=2 00:10:31.123 lat (msec) : 4=0.01%, 10=10.93%, 20=75.24%, 50=10.64%, 100=3.19% 00:10:31.123 cpu : usr=3.29%, sys=10.36%, ctx=476, majf=0, minf=8 00:10:31.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:31.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.123 issued rwts: total=3584,3665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.123 job2: (groupid=0, jobs=1): err= 0: pid=66781: Wed Nov 20 13:29:42 2024 00:10:31.123 read: IOPS=1581, BW=6326KiB/s (6478kB/s)(6364KiB/1006msec) 00:10:31.123 slat (usec): min=6, max=19355, avg=314.92, stdev=1656.04 00:10:31.123 clat (usec): min=3501, max=63936, avg=38780.79, stdev=13020.55 00:10:31.123 lat (usec): min=6377, max=63948, avg=39095.70, stdev=13023.88 00:10:31.123 clat percentiles (usec): 00:10:31.123 | 1.00th=[ 8717], 5.00th=[21627], 10.00th=[21627], 20.00th=[26346], 00:10:31.123 | 30.00th=[29230], 40.00th=[32375], 50.00th=[35914], 60.00th=[41157], 00:10:31.123 | 70.00th=[48497], 80.00th=[51643], 90.00th=[56886], 95.00th=[60031], 00:10:31.123 | 99.00th=[63701], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:10:31.123 | 99.99th=[63701] 00:10:31.123 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:10:31.124 slat (usec): min=11, max=10961, avg=237.20, stdev=1087.80 00:10:31.124 clat (usec): min=13272, max=55962, avg=31428.52, stdev=11157.80 00:10:31.124 lat (usec): min=15856, max=56012, avg=31665.72, stdev=11182.01 00:10:31.124 clat percentiles (usec): 00:10:31.124 | 1.00th=[15926], 5.00th=[16188], 10.00th=[16909], 20.00th=[20841], 00:10:31.124 | 30.00th=[22414], 40.00th=[26346], 50.00th=[30802], 60.00th=[33162], 00:10:31.124 | 70.00th=[36963], 80.00th=[43254], 90.00th=[47449], 95.00th=[50594], 00:10:31.124 | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:10:31.124 | 99.99th=[55837] 00:10:31.124 bw ( KiB/s): min= 7608, max= 8192, per=20.26%, avg=7900.00, stdev=412.95, samples=2 00:10:31.124 iops : min= 1902, max= 2048, avg=1975.00, stdev=103.24, samples=2 00:10:31.124 lat (msec) : 4=0.03%, 10=0.60%, 20=8.99%, 50=74.91%, 100=15.47% 00:10:31.124 cpu : usr=1.59%, sys=5.47%, ctx=267, majf=0, minf=13 00:10:31.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:10:31.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.124 issued rwts: total=1591,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.124 job3: (groupid=0, jobs=1): err= 0: pid=66782: Wed Nov 20 13:29:42 
2024 00:10:31.124 read: IOPS=1407, BW=5632KiB/s (5767kB/s)(5660KiB/1005msec) 00:10:31.124 slat (usec): min=6, max=18515, avg=375.94, stdev=1864.24 00:10:31.124 clat (usec): min=3717, max=76305, avg=48528.88, stdev=11845.39 00:10:31.124 lat (usec): min=7046, max=76327, avg=48904.82, stdev=11761.45 00:10:31.124 clat percentiles (usec): 00:10:31.124 | 1.00th=[ 9110], 5.00th=[29492], 10.00th=[36963], 20.00th=[40109], 00:10:31.124 | 30.00th=[43779], 40.00th=[47449], 50.00th=[50070], 60.00th=[51643], 00:10:31.124 | 70.00th=[52167], 80.00th=[54264], 90.00th=[61080], 95.00th=[70779], 00:10:31.124 | 99.00th=[76022], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:10:31.124 | 99.99th=[76022] 00:10:31.124 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:10:31.124 slat (usec): min=5, max=19362, avg=296.78, stdev=1337.89 00:10:31.124 clat (usec): min=18514, max=56796, avg=37367.67, stdev=9127.11 00:10:31.124 lat (usec): min=24011, max=59200, avg=37664.46, stdev=9134.75 00:10:31.124 clat percentiles (usec): 00:10:31.124 | 1.00th=[23987], 5.00th=[25297], 10.00th=[25297], 20.00th=[26870], 00:10:31.124 | 30.00th=[31327], 40.00th=[33162], 50.00th=[36439], 60.00th=[41157], 00:10:31.124 | 70.00th=[43779], 80.00th=[46400], 90.00th=[50070], 95.00th=[52691], 00:10:31.124 | 99.00th=[55837], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:10:31.124 | 99.99th=[56886] 00:10:31.124 bw ( KiB/s): min= 4872, max= 7430, per=15.77%, avg=6151.00, stdev=1808.78, samples=2 00:10:31.124 iops : min= 1218, max= 1857, avg=1537.50, stdev=451.84, samples=2 00:10:31.124 lat (msec) : 4=0.03%, 10=0.47%, 20=1.05%, 50=69.16%, 100=29.28% 00:10:31.124 cpu : usr=1.49%, sys=5.08%, ctx=322, majf=0, minf=13 00:10:31.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:10:31.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.124 issued rwts: total=1415,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.124 00:10:31.124 Run status group 0 (all jobs): 00:10:31.124 READ: bw=34.7MiB/s (36.3MB/s), 5632KiB/s-13.9MiB/s (5767kB/s-14.6MB/s), io=34.9MiB (36.6MB), run=1003-1006msec 00:10:31.124 WRITE: bw=38.1MiB/s (39.9MB/s), 6113KiB/s-14.2MiB/s (6260kB/s-14.9MB/s), io=38.3MiB (40.2MB), run=1003-1006msec 00:10:31.124 00:10:31.124 Disk stats (read/write): 00:10:31.124 nvme0n1: ios=1714/2048, merge=0/0, ticks=14019/9919, in_queue=23938, util=86.36% 00:10:31.124 nvme0n2: ios=3251/3584, merge=0/0, ticks=15355/15438, in_queue=30793, util=87.53% 00:10:31.124 nvme0n3: ios=1536/1608, merge=0/0, ticks=15002/10532, in_queue=25534, util=88.81% 00:10:31.124 nvme0n4: ios=1024/1480, merge=0/0, ticks=12771/12999, in_queue=25770, util=89.56% 00:10:31.124 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:31.124 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66796 00:10:31.124 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:31.124 13:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:31.124 [global] 00:10:31.124 thread=1 00:10:31.124 invalidate=1 00:10:31.124 rw=read 00:10:31.124 time_based=1 00:10:31.124 runtime=10 00:10:31.124 ioengine=libaio 00:10:31.124 direct=1 00:10:31.124 bs=4096 
00:10:31.124 iodepth=1 00:10:31.124 norandommap=1 00:10:31.124 numjobs=1 00:10:31.124 00:10:31.124 [job0] 00:10:31.124 filename=/dev/nvme0n1 00:10:31.124 [job1] 00:10:31.124 filename=/dev/nvme0n2 00:10:31.124 [job2] 00:10:31.124 filename=/dev/nvme0n3 00:10:31.124 [job3] 00:10:31.124 filename=/dev/nvme0n4 00:10:31.124 Could not set queue depth (nvme0n1) 00:10:31.124 Could not set queue depth (nvme0n2) 00:10:31.124 Could not set queue depth (nvme0n3) 00:10:31.124 Could not set queue depth (nvme0n4) 00:10:31.124 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.124 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.124 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.124 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.124 fio-3.35 00:10:31.124 Starting 4 threads 00:10:34.432 13:29:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:34.432 fio: pid=66839, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.432 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39858176, buflen=4096 00:10:34.432 13:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:34.432 fio: pid=66838, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.432 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44281856, buflen=4096 00:10:34.692 13:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.692 13:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:34.951 fio: pid=66836, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.951 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=58605568, buflen=4096 00:10:34.951 13:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.951 13:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:35.210 fio: pid=66837, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:35.210 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=67096576, buflen=4096 00:10:35.210 00:10:35.210 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66836: Wed Nov 20 13:29:47 2024 00:10:35.210 read: IOPS=4062, BW=15.9MiB/s (16.6MB/s)(55.9MiB/3522msec) 00:10:35.210 slat (usec): min=7, max=16968, avg=14.58, stdev=192.50 00:10:35.210 clat (usec): min=62, max=7760, avg=230.36, stdev=103.63 00:10:35.210 lat (usec): min=141, max=17260, avg=244.94, stdev=219.15 00:10:35.210 clat percentiles (usec): 00:10:35.210 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 178], 20.00th=[ 215], 00:10:35.210 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:10:35.210 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 269], 00:10:35.210 | 
99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 725], 99.95th=[ 1844], 00:10:35.210 | 99.99th=[ 7308] 00:10:35.210 bw ( KiB/s): min=15160, max=18440, per=30.44%, avg=16220.00, stdev=1131.47, samples=6 00:10:35.210 iops : min= 3790, max= 4610, avg=4055.00, stdev=282.87, samples=6 00:10:35.210 lat (usec) : 100=0.01%, 250=82.05%, 500=17.80%, 750=0.04%, 1000=0.03% 00:10:35.210 lat (msec) : 2=0.02%, 4=0.03%, 10=0.01% 00:10:35.210 cpu : usr=0.80%, sys=4.74%, ctx=14326, majf=0, minf=1 00:10:35.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.210 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.210 issued rwts: total=14309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.210 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66837: Wed Nov 20 13:29:47 2024 00:10:35.210 read: IOPS=4259, BW=16.6MiB/s (17.4MB/s)(64.0MiB/3846msec) 00:10:35.210 slat (usec): min=7, max=15788, avg=15.84, stdev=209.56 00:10:35.210 clat (usec): min=37, max=3038, avg=217.62, stdev=52.65 00:10:35.210 lat (usec): min=139, max=16122, avg=233.46, stdev=216.76 00:10:35.210 clat percentiles (usec): 00:10:35.210 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 153], 20.00th=[ 169], 00:10:35.210 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233], 00:10:35.210 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:10:35.210 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 490], 99.95th=[ 799], 00:10:35.210 | 99.99th=[ 2147] 00:10:35.210 bw ( KiB/s): min=15856, max=20224, per=31.19%, avg=16617.29, stdev=1595.57, samples=7 00:10:35.210 iops : min= 3964, max= 5056, avg=4154.29, stdev=398.90, samples=7 00:10:35.210 lat (usec) : 50=0.01%, 250=86.25%, 500=13.65%, 750=0.03%, 1000=0.01% 00:10:35.210 lat (msec) : 2=0.03%, 4=0.01% 00:10:35.210 cpu : usr=1.30%, sys=4.86%, ctx=16405, majf=0, minf=1 00:10:35.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.210 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.210 issued rwts: total=16382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.210 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66838: Wed Nov 20 13:29:47 2024 00:10:35.210 read: IOPS=3337, BW=13.0MiB/s (13.7MB/s)(42.2MiB/3240msec) 00:10:35.210 slat (usec): min=11, max=11037, avg=20.51, stdev=141.71 00:10:35.210 clat (usec): min=147, max=2135, avg=277.48, stdev=67.27 00:10:35.210 lat (usec): min=163, max=11248, avg=297.99, stdev=156.66 00:10:35.210 clat percentiles (usec): 00:10:35.210 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 192], 20.00th=[ 260], 00:10:35.210 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:10:35.210 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 334], 00:10:35.210 | 99.00th=[ 441], 99.50th=[ 523], 99.90th=[ 938], 99.95th=[ 1598], 00:10:35.210 | 99.99th=[ 2114] 00:10:35.210 bw ( KiB/s): min=12648, max=13640, per=24.46%, avg=13033.33, stdev=362.19, samples=6 00:10:35.210 iops : min= 3162, max= 3410, avg=3258.33, stdev=90.55, samples=6 00:10:35.210 lat (usec) : 250=14.23%, 500=85.20%, 750=0.42%, 1000=0.05% 00:10:35.210 lat (msec) 
: 2=0.06%, 4=0.03% 00:10:35.210 cpu : usr=1.05%, sys=5.46%, ctx=10815, majf=0, minf=2 00:10:35.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.210 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.210 issued rwts: total=10812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.210 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66839: Wed Nov 20 13:29:47 2024 00:10:35.210 read: IOPS=3284, BW=12.8MiB/s (13.5MB/s)(38.0MiB/2963msec) 00:10:35.210 slat (usec): min=11, max=277, avg=18.28, stdev= 7.78 00:10:35.210 clat (usec): min=147, max=1632, avg=284.59, stdev=39.73 00:10:35.210 lat (usec): min=161, max=1648, avg=302.87, stdev=39.81 00:10:35.210 clat percentiles (usec): 00:10:35.210 | 1.00th=[ 194], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 265], 00:10:35.210 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:10:35.210 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 326], 00:10:35.210 | 99.00th=[ 396], 99.50th=[ 474], 99.90th=[ 693], 99.95th=[ 873], 00:10:35.210 | 99.99th=[ 1631] 00:10:35.210 bw ( KiB/s): min=12784, max=13440, per=24.69%, avg=13153.60, stdev=253.51, samples=5 00:10:35.210 iops : min= 3196, max= 3360, avg=3288.40, stdev=63.38, samples=5 00:10:35.210 lat (usec) : 250=7.15%, 500=92.43%, 750=0.35%, 1000=0.04% 00:10:35.210 lat (msec) : 2=0.02% 00:10:35.210 cpu : usr=0.98%, sys=5.33%, ctx=9754, majf=0, minf=1 00:10:35.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.210 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.210 issued rwts: total=9732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.210 00:10:35.210 Run status group 0 (all jobs): 00:10:35.210 READ: bw=52.0MiB/s (54.6MB/s), 12.8MiB/s-16.6MiB/s (13.5MB/s-17.4MB/s), io=200MiB (210MB), run=2963-3846msec 00:10:35.210 00:10:35.210 Disk stats (read/write): 00:10:35.210 nvme0n1: ios=13642/0, merge=0/0, ticks=3008/0, in_queue=3008, util=94.93% 00:10:35.210 nvme0n2: ios=14994/0, merge=0/0, ticks=3302/0, in_queue=3302, util=95.26% 00:10:35.210 nvme0n3: ios=10200/0, merge=0/0, ticks=2933/0, in_queue=2933, util=96.24% 00:10:35.210 nvme0n4: ios=9413/0, merge=0/0, ticks=2695/0, in_queue=2695, util=96.73% 00:10:35.210 13:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.210 13:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:35.469 13:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.469 13:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:35.727 13:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.727 13:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:36.293 13:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.293 13:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:36.552 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.552 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66796 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.810 nvmf hotplug test: fio failed as expected 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:36.810 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:37.069 rmmod nvme_tcp 00:10:37.069 rmmod nvme_fabrics 00:10:37.069 rmmod nvme_keyring 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66409 ']' 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66409 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66409 ']' 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66409 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.069 13:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66409 00:10:37.069 killing process with pid 66409 00:10:37.069 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.069 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.069 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66409' 00:10:37.069 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66409 00:10:37.069 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66409 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:37.343 13:29:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:37.343 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:37.601 ************************************ 00:10:37.601 END TEST nvmf_fio_target 00:10:37.601 ************************************ 00:10:37.601 00:10:37.601 real 0m20.458s 00:10:37.601 user 1m17.826s 00:10:37.601 sys 0m9.679s 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.601 ************************************ 00:10:37.601 START TEST nvmf_bdevio 00:10:37.601 ************************************ 00:10:37.601 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:37.933 * Looking for test storage... 
00:10:37.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.933 --rc genhtml_branch_coverage=1 00:10:37.933 --rc genhtml_function_coverage=1 00:10:37.933 --rc genhtml_legend=1 00:10:37.933 --rc geninfo_all_blocks=1 00:10:37.933 --rc geninfo_unexecuted_blocks=1 00:10:37.933 00:10:37.933 ' 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.933 --rc genhtml_branch_coverage=1 00:10:37.933 --rc genhtml_function_coverage=1 00:10:37.933 --rc genhtml_legend=1 00:10:37.933 --rc geninfo_all_blocks=1 00:10:37.933 --rc geninfo_unexecuted_blocks=1 00:10:37.933 00:10:37.933 ' 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.933 --rc genhtml_branch_coverage=1 00:10:37.933 --rc genhtml_function_coverage=1 00:10:37.933 --rc genhtml_legend=1 00:10:37.933 --rc geninfo_all_blocks=1 00:10:37.933 --rc geninfo_unexecuted_blocks=1 00:10:37.933 00:10:37.933 ' 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.933 --rc genhtml_branch_coverage=1 00:10:37.933 --rc genhtml_function_coverage=1 00:10:37.933 --rc genhtml_legend=1 00:10:37.933 --rc geninfo_all_blocks=1 00:10:37.933 --rc geninfo_unexecuted_blocks=1 00:10:37.933 00:10:37.933 ' 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:10:37.933 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.934 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
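Note: the nvmftestinit call traced here drives nvmf_veth_init, which builds the virtual network the TCP tests run over. The following is a condensed sketch of those steps, reconstructed from the traced commands further down in this log; the interface names, the nvmf_tgt_ns_spdk namespace, and the 10.0.0.1-10.0.0.4 addresses are the values this particular run uses, not general requirements.

    # target-side interfaces live in their own namespace; initiator-side interfaces stay on the host
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # everything is joined through one bridge, and TCP port 4420 is opened for the initiator interfaces
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3   # reachability checks in both directions follow in the trace

The sketch omits the individual link-up commands for the veth endpoints; the full sequence appears verbatim in the nvmf/common.sh trace below.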
00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:37.934 Cannot find device "nvmf_init_br" 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:37.934 Cannot find device "nvmf_init_br2" 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:37.934 Cannot find device "nvmf_tgt_br" 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.934 Cannot find device "nvmf_tgt_br2" 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:37.934 Cannot find device "nvmf_init_br" 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:37.934 Cannot find device "nvmf_init_br2" 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:37.934 Cannot find device "nvmf_tgt_br" 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:37.934 Cannot find device "nvmf_tgt_br2" 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:37.934 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:38.199 Cannot find device "nvmf_br" 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:38.199 Cannot find device "nvmf_init_if" 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:38.199 Cannot find device "nvmf_init_if2" 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:38.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:38.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:38.199 
13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:38.199 13:29:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:38.199 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:38.458 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:38.458 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:10:38.458 00:10:38.458 --- 10.0.0.3 ping statistics --- 00:10:38.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.458 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:38.458 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:38.458 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 00:10:38.458 00:10:38.458 --- 10.0.0.4 ping statistics --- 00:10:38.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.458 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:38.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:38.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:38.458 00:10:38.458 --- 10.0.0.1 ping statistics --- 00:10:38.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.458 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:38.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:38.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:10:38.458 00:10:38.458 --- 10.0.0.2 ping statistics --- 00:10:38.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.458 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67162 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67162 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 67162 ']' 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.458 13:29:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.458 [2024-11-20 13:29:50.318065] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:10:38.458 [2024-11-20 13:29:50.318176] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.717 [2024-11-20 13:29:50.474717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.717 [2024-11-20 13:29:50.592526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.717 [2024-11-20 13:29:50.593466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.717 [2024-11-20 13:29:50.594245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.717 [2024-11-20 13:29:50.594968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.717 [2024-11-20 13:29:50.595424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.717 [2024-11-20 13:29:50.597921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:38.717 [2024-11-20 13:29:50.598070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:38.717 [2024-11-20 13:29:50.598155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:38.717 [2024-11-20 13:29:50.598169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.975 [2024-11-20 13:29:50.696965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.542 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.542 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:39.542 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.542 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.542 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.542 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.542 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.542 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.542 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.542 [2024-11-20 13:29:51.472181] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.542 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.542 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:39.542 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.542 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.800 Malloc0 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.800 [2024-11-20 13:29:51.544305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.800 { 00:10:39.800 "params": { 00:10:39.800 "name": "Nvme$subsystem", 00:10:39.800 "trtype": "$TEST_TRANSPORT", 00:10:39.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.800 "adrfam": "ipv4", 00:10:39.800 "trsvcid": "$NVMF_PORT", 00:10:39.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.800 "hdgst": ${hdgst:-false}, 00:10:39.800 "ddgst": ${ddgst:-false} 00:10:39.800 }, 00:10:39.800 "method": "bdev_nvme_attach_controller" 00:10:39.800 } 00:10:39.800 EOF 00:10:39.800 )") 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
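For reference, the target that bdevio is about to exercise was assembled entirely through the rpc_cmd calls traced above. Expressed as direct scripts/rpc.py invocations (a readability sketch, not the literal commands; the harness issues the same RPCs through its rpc_cmd wrapper, and every value below is taken from the trace):

    # transport, a 64 MiB / 512 B-block malloc bdev, subsystem, namespace, TCP listener
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

gen_nvmf_target_json then fills the bdev_nvme_attach_controller template shown above for subsystem 1 and hands the rendered JSON (printed next in the trace) to bdevio over /dev/fd/62, i.e. via bash process substitution, so the bdevio process attaches to 10.0.0.3:4420 / nqn.2016-06.io.spdk:cnode1 and runs its block-device tests against the resulting Nvme1n1 bdev.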
00:10:39.800 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:39.801 13:29:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.801 "params": { 00:10:39.801 "name": "Nvme1", 00:10:39.801 "trtype": "tcp", 00:10:39.801 "traddr": "10.0.0.3", 00:10:39.801 "adrfam": "ipv4", 00:10:39.801 "trsvcid": "4420", 00:10:39.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.801 "hdgst": false, 00:10:39.801 "ddgst": false 00:10:39.801 }, 00:10:39.801 "method": "bdev_nvme_attach_controller" 00:10:39.801 }' 00:10:39.801 [2024-11-20 13:29:51.604004] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:39.801 [2024-11-20 13:29:51.604097] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67204 ] 00:10:40.059 [2024-11-20 13:29:51.760732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:40.059 [2024-11-20 13:29:51.831241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.059 [2024-11-20 13:29:51.831391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.059 [2024-11-20 13:29:51.831401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.059 [2024-11-20 13:29:51.897858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:40.318 I/O targets: 00:10:40.318 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:40.318 00:10:40.318 00:10:40.318 CUnit - A unit testing framework for C - Version 2.1-3 00:10:40.318 http://cunit.sourceforge.net/ 00:10:40.318 00:10:40.318 00:10:40.318 Suite: bdevio tests on: Nvme1n1 00:10:40.318 Test: blockdev write read block ...passed 00:10:40.318 Test: blockdev write zeroes read block ...passed 00:10:40.318 Test: blockdev write zeroes read no split ...passed 00:10:40.318 Test: blockdev write zeroes read split ...passed 00:10:40.318 Test: blockdev write zeroes read split partial ...passed 00:10:40.318 Test: blockdev reset ...[2024-11-20 13:29:52.048280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:40.318 [2024-11-20 13:29:52.048390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x817180 (9): Bad file descriptor 00:10:40.318 passed 00:10:40.318 Test: blockdev write read 8 blocks ...[2024-11-20 13:29:52.064677] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:40.318 passed 00:10:40.318 Test: blockdev write read size > 128k ...passed 00:10:40.318 Test: blockdev write read invalid size ...passed 00:10:40.318 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:40.318 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:40.318 Test: blockdev write read max offset ...passed 00:10:40.318 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:40.318 Test: blockdev writev readv 8 blocks ...passed 00:10:40.318 Test: blockdev writev readv 30 x 1block ...passed 00:10:40.318 Test: blockdev writev readv block ...passed 00:10:40.318 Test: blockdev writev readv size > 128k ...passed 00:10:40.318 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:40.318 Test: blockdev comparev and writev ...[2024-11-20 13:29:52.072952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.318 [2024-11-20 13:29:52.073115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:40.318 [2024-11-20 13:29:52.073145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.318 [2024-11-20 13:29:52.073157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:40.318 [2024-11-20 13:29:52.073502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.318 [2024-11-20 13:29:52.073527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:40.318 [2024-11-20 13:29:52.073545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.318 [2024-11-20 13:29:52.073556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:40.318 [2024-11-20 13:29:52.073834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.318 [2024-11-20 13:29:52.073856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:40.318 [2024-11-20 13:29:52.073873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.318 [2024-11-20 13:29:52.073884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:40.318 [2024-11-20 13:29:52.074143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.318 [2024-11-20 13:29:52.074165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:40.318 [2024-11-20 13:29:52.074183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:40.318 [2024-11-20 13:29:52.074446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:10:40.318 passed 00:10:40.318 Test: blockdev nvme passthru rw ...passed 00:10:40.318 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:29:52.075981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.318 [2024-11-20 13:29:52.076141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 spassed 00:10:40.318 Test: blockdev nvme admin passthru ...qhd:002c p:0 m:0 dnr:0 00:10:40.318 [2024-11-20 13:29:52.076360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.318 [2024-11-20 13:29:52.076385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:40.318 [2024-11-20 13:29:52.076486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.318 [2024-11-20 13:29:52.076506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:40.318 [2024-11-20 13:29:52.076609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.318 [2024-11-20 13:29:52.076629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:40.318 passed 00:10:40.318 Test: blockdev copy ...passed 00:10:40.318 00:10:40.318 Run Summary: Type Total Ran Passed Failed Inactive 00:10:40.318 suites 1 1 n/a 0 0 00:10:40.318 tests 23 23 23 0 0 00:10:40.318 asserts 152 152 152 0 n/a 00:10:40.318 00:10:40.318 Elapsed time = 0.154 seconds 00:10:40.318 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.318 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.318 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.576 rmmod nvme_tcp 00:10:40.576 rmmod nvme_fabrics 00:10:40.576 rmmod nvme_keyring 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 67162 ']' 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67162 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 67162 ']' 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 67162 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67162 00:10:40.576 killing process with pid 67162 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67162' 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 67162 00:10:40.576 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 67162 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:40.836 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:41.095 13:29:52 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:41.095 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.095 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.095 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:41.095 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.095 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.095 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.095 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:41.095 00:10:41.095 real 0m3.370s 00:10:41.095 user 0m9.782s 00:10:41.095 sys 0m1.017s 00:10:41.095 ************************************ 00:10:41.095 END TEST nvmf_bdevio 00:10:41.095 ************************************ 00:10:41.095 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.095 13:29:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.095 13:29:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:41.095 ************************************ 00:10:41.095 END TEST nvmf_target_core 00:10:41.095 ************************************ 00:10:41.095 00:10:41.095 real 2m41.716s 00:10:41.095 user 7m6.724s 00:10:41.095 sys 0m54.033s 00:10:41.095 13:29:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.095 13:29:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:41.095 13:29:52 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:41.095 13:29:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.095 13:29:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.095 13:29:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:41.095 ************************************ 00:10:41.095 START TEST nvmf_target_extra 00:10:41.095 ************************************ 00:10:41.095 13:29:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:41.354 * Looking for test storage... 
00:10:41.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.354 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:41.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.355 --rc genhtml_branch_coverage=1 00:10:41.355 --rc genhtml_function_coverage=1 00:10:41.355 --rc genhtml_legend=1 00:10:41.355 --rc geninfo_all_blocks=1 00:10:41.355 --rc geninfo_unexecuted_blocks=1 00:10:41.355 00:10:41.355 ' 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:41.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.355 --rc genhtml_branch_coverage=1 00:10:41.355 --rc genhtml_function_coverage=1 00:10:41.355 --rc genhtml_legend=1 00:10:41.355 --rc geninfo_all_blocks=1 00:10:41.355 --rc geninfo_unexecuted_blocks=1 00:10:41.355 00:10:41.355 ' 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:41.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.355 --rc genhtml_branch_coverage=1 00:10:41.355 --rc genhtml_function_coverage=1 00:10:41.355 --rc genhtml_legend=1 00:10:41.355 --rc geninfo_all_blocks=1 00:10:41.355 --rc geninfo_unexecuted_blocks=1 00:10:41.355 00:10:41.355 ' 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:41.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.355 --rc genhtml_branch_coverage=1 00:10:41.355 --rc genhtml_function_coverage=1 00:10:41.355 --rc genhtml_legend=1 00:10:41.355 --rc geninfo_all_blocks=1 00:10:41.355 --rc geninfo_unexecuted_blocks=1 00:10:41.355 00:10:41.355 ' 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.355 13:29:53 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.355 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:41.355 ************************************ 00:10:41.355 START TEST nvmf_auth_target 00:10:41.355 ************************************ 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:41.355 * Looking for test storage... 
00:10:41.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:41.355 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:41.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.635 --rc genhtml_branch_coverage=1 00:10:41.635 --rc genhtml_function_coverage=1 00:10:41.635 --rc genhtml_legend=1 00:10:41.635 --rc geninfo_all_blocks=1 00:10:41.635 --rc geninfo_unexecuted_blocks=1 00:10:41.635 00:10:41.635 ' 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:41.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.635 --rc genhtml_branch_coverage=1 00:10:41.635 --rc genhtml_function_coverage=1 00:10:41.635 --rc genhtml_legend=1 00:10:41.635 --rc geninfo_all_blocks=1 00:10:41.635 --rc geninfo_unexecuted_blocks=1 00:10:41.635 00:10:41.635 ' 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:41.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.635 --rc genhtml_branch_coverage=1 00:10:41.635 --rc genhtml_function_coverage=1 00:10:41.635 --rc genhtml_legend=1 00:10:41.635 --rc geninfo_all_blocks=1 00:10:41.635 --rc geninfo_unexecuted_blocks=1 00:10:41.635 00:10:41.635 ' 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:41.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.635 --rc genhtml_branch_coverage=1 00:10:41.635 --rc genhtml_function_coverage=1 00:10:41.635 --rc genhtml_legend=1 00:10:41.635 --rc geninfo_all_blocks=1 00:10:41.635 --rc geninfo_unexecuted_blocks=1 00:10:41.635 00:10:41.635 ' 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.635 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.635 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:41.636 
13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:41.636 Cannot find device "nvmf_init_br" 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:41.636 Cannot find device "nvmf_init_br2" 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:41.636 Cannot find device "nvmf_tgt_br" 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.636 Cannot find device "nvmf_tgt_br2" 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:41.636 Cannot find device "nvmf_init_br" 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:41.636 Cannot find device "nvmf_init_br2" 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:41.636 Cannot find device "nvmf_tgt_br" 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:41.636 Cannot find device "nvmf_tgt_br2" 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:41.636 Cannot find device "nvmf_br" 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:41.636 Cannot find device "nvmf_init_if" 00:10:41.636 13:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:41.636 Cannot find device "nvmf_init_if2" 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:41.636 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:41.895 13:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:41.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:41.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:10:41.895 00:10:41.895 --- 10.0.0.3 ping statistics --- 00:10:41.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.895 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:41.895 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:41.895 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:10:41.895 00:10:41.895 --- 10.0.0.4 ping statistics --- 00:10:41.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.895 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:41.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:41.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:10:41.895 00:10:41.895 --- 10.0.0.1 ping statistics --- 00:10:41.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.895 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:41.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:10:41.895 00:10:41.895 --- 10.0.0.2 ping statistics --- 00:10:41.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.895 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.895 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.896 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67488 00:10:41.896 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:41.896 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67488 00:10:41.896 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67488 ']' 00:10:41.896 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.896 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.896 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
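The ping checks above exercise the virtual test network that nvmf_veth_init builds earlier in this log. Condensed to one initiator/target pair (the trace also creates nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4), the same topology can be sketched with plain iproute2 commands:

  # network namespace for the target side
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs; the *_br ends stay behind as bridge ports
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # addresses used throughout this log: initiator 10.0.0.1, target 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the two sides together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # admit NVMe/TCP traffic on port 4420 (the trace tags the rule with an SPDK_NVMF comment)
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

Every command here appears in the nvmf_veth_init trace above; only the second interface pair and its matching port-4420 rule are omitted.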
00:10:41.896 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.896 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67507 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2a52e5af378cf36af9d2a45027d88fcd32d115d849f08774 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.sar 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2a52e5af378cf36af9d2a45027d88fcd32d115d849f08774 0 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2a52e5af378cf36af9d2a45027d88fcd32d115d849f08774 0 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2a52e5af378cf36af9d2a45027d88fcd32d115d849f08774 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.463 13:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.sar 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.sar 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.sar 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=46fa86d2dfc2b229b3933427197f023ef7b8e856cf45a26e9ade40ed2b867190 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ED2 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 46fa86d2dfc2b229b3933427197f023ef7b8e856cf45a26e9ade40ed2b867190 3 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 46fa86d2dfc2b229b3933427197f023ef7b8e856cf45a26e9ade40ed2b867190 3 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=46fa86d2dfc2b229b3933427197f023ef7b8e856cf45a26e9ade40ed2b867190 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:42.463 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ED2 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ED2 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ED2 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:42.722 13:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d90a415855cc4cffa3d08835e71fa8c5 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8TY 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d90a415855cc4cffa3d08835e71fa8c5 1 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d90a415855cc4cffa3d08835e71fa8c5 1 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d90a415855cc4cffa3d08835e71fa8c5 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8TY 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8TY 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.8TY 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=617dbc2b0f9cd2616fabb0c755318c4990eaea731fa03020 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.IoB 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 617dbc2b0f9cd2616fabb0c755318c4990eaea731fa03020 2 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 617dbc2b0f9cd2616fabb0c755318c4990eaea731fa03020 2 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=617dbc2b0f9cd2616fabb0c755318c4990eaea731fa03020 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.IoB 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.IoB 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.IoB 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a3b9f2581ef09c875f394010bdf0f68569963a43b380bb06 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.STL 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a3b9f2581ef09c875f394010bdf0f68569963a43b380bb06 2 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a3b9f2581ef09c875f394010bdf0f68569963a43b380bb06 2 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a3b9f2581ef09c875f394010bdf0f68569963a43b380bb06 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.STL 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.STL 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.STL 00:10:42.722 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:42.723 13:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4f9fc27bbe702ecbc71d99ce595565fa 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5zD 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4f9fc27bbe702ecbc71d99ce595565fa 1 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4f9fc27bbe702ecbc71d99ce595565fa 1 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4f9fc27bbe702ecbc71d99ce595565fa 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:42.723 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.982 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5zD 00:10:42.982 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5zD 00:10:42.982 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.5zD 00:10:42.982 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:42.982 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:42.982 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.982 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:42.982 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:42.982 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:42.982 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:42.982 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8238e95524bbce8fc2152fb700cf0d4f414de038cc2a36d869ab8320b0c7ecff 00:10:42.982 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:42.982 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.UuZ 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
8238e95524bbce8fc2152fb700cf0d4f414de038cc2a36d869ab8320b0c7ecff 3 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8238e95524bbce8fc2152fb700cf0d4f414de038cc2a36d869ab8320b0c7ecff 3 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8238e95524bbce8fc2152fb700cf0d4f414de038cc2a36d869ab8320b0c7ecff 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.UuZ 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.UuZ 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.UuZ 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67488 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67488 ']' 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.983 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:43.242 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.242 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:43.242 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67507 /var/tmp/host.sock 00:10:43.242 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67507 ']' 00:10:43.242 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:10:43.242 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.242 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
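The gen_dhchap_key calls above produce four subsystem keys (keys[0..3]) and three controller keys (ckeys[0..2]) of varying digest and length, each written to a /tmp/spdk.key-<digest>.XXX file with mode 0600. The raw material is hex from /dev/urandom via xxd, which the helper then wraps into the DHHC-1:<digest-id>:<base64>: form that the nvme connect commands use later in this log. A minimal stand-alone sketch of that wrapping, assuming (as the expanded secrets below suggest, but not confirmed by the log itself) that the base64 payload is the ASCII secret followed by its little-endian CRC-32:
key=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes -> 48 hex chars ("len=48" above)
digest=0                                 # 0=null, 1=sha256, 2=sha384, 3=sha512
secret=$(python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); d=int(sys.argv[2]); print("DHHC-1:%02x:%s:" % (d, base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$key" "$digest")
file=$(mktemp -t spdk.key-null.XXX)      # e.g. /tmp/spdk.key-null.sar above
echo "$secret" > "$file"
chmod 0600 "$file"
The key file is what gets registered with keyring_file_add_key below; the expanded DHHC-1 string is what the kernel initiator passes on the nvme connect command line.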
00:10:43.242 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.242 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.501 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.501 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:43.501 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:43.501 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.501 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.501 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.501 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:43.501 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.sar 00:10:43.501 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.501 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.501 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.501 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.sar 00:10:43.501 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.sar 00:10:43.760 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.ED2 ]] 00:10:43.760 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ED2 00:10:43.760 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.760 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.019 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.019 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ED2 00:10:44.019 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ED2 00:10:44.278 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:44.278 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8TY 00:10:44.278 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.278 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.278 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.278 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.8TY 00:10:44.279 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.8TY 00:10:44.538 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.IoB ]] 00:10:44.538 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IoB 00:10:44.538 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.538 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.538 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.538 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IoB 00:10:44.538 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IoB 00:10:44.796 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:44.796 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.STL 00:10:44.796 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.796 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.796 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.796 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.STL 00:10:44.796 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.STL 00:10:45.055 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.5zD ]] 00:10:45.055 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5zD 00:10:45.055 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.055 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.055 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.055 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5zD 00:10:45.055 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5zD 00:10:45.314 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:45.314 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.UuZ 00:10:45.314 13:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.314 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.314 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.314 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.UuZ 00:10:45.314 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.UuZ 00:10:45.572 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:45.572 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:45.572 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:45.572 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:45.572 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:45.572 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:45.831 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:45.831 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:45.831 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:45.831 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:45.831 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:45.831 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.831 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.831 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.831 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.831 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.831 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.831 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.831 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.397 00:10:46.397 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.397 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.397 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.654 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.654 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.654 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.655 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.655 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.655 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:46.655 { 00:10:46.655 "cntlid": 1, 00:10:46.655 "qid": 0, 00:10:46.655 "state": "enabled", 00:10:46.655 "thread": "nvmf_tgt_poll_group_000", 00:10:46.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:10:46.655 "listen_address": { 00:10:46.655 "trtype": "TCP", 00:10:46.655 "adrfam": "IPv4", 00:10:46.655 "traddr": "10.0.0.3", 00:10:46.655 "trsvcid": "4420" 00:10:46.655 }, 00:10:46.655 "peer_address": { 00:10:46.655 "trtype": "TCP", 00:10:46.655 "adrfam": "IPv4", 00:10:46.655 "traddr": "10.0.0.1", 00:10:46.655 "trsvcid": "45722" 00:10:46.655 }, 00:10:46.655 "auth": { 00:10:46.655 "state": "completed", 00:10:46.655 "digest": "sha256", 00:10:46.655 "dhgroup": "null" 00:10:46.655 } 00:10:46.655 } 00:10:46.655 ]' 00:10:46.655 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.655 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:46.655 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.655 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:46.655 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.912 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.912 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.912 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.171 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:10:47.171 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:52.442 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:52.442 13:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:52.442 00:10:52.442 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:52.442 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:52.442 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:52.699 { 00:10:52.699 "cntlid": 3, 00:10:52.699 "qid": 0, 00:10:52.699 "state": "enabled", 00:10:52.699 "thread": "nvmf_tgt_poll_group_000", 00:10:52.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:10:52.699 "listen_address": { 00:10:52.699 "trtype": "TCP", 00:10:52.699 "adrfam": "IPv4", 00:10:52.699 "traddr": "10.0.0.3", 00:10:52.699 "trsvcid": "4420" 00:10:52.699 }, 00:10:52.699 "peer_address": { 00:10:52.699 "trtype": "TCP", 00:10:52.699 "adrfam": "IPv4", 00:10:52.699 "traddr": "10.0.0.1", 00:10:52.699 "trsvcid": "36810" 00:10:52.699 }, 00:10:52.699 "auth": { 00:10:52.699 "state": "completed", 00:10:52.699 "digest": "sha256", 00:10:52.699 "dhgroup": "null" 00:10:52.699 } 00:10:52.699 } 00:10:52.699 ]' 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.699 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.266 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret 
DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:10:53.266 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:10:53.831 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.831 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:10:53.831 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.831 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.831 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.831 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:53.831 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:53.832 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:54.089 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:54.089 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.089 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:54.089 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:54.089 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:54.089 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.089 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:54.089 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.089 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.089 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.089 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:54.089 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:54.089 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:54.654 00:10:54.654 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:54.654 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:54.654 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.912 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.912 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.912 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.912 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.912 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.912 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:54.912 { 00:10:54.912 "cntlid": 5, 00:10:54.912 "qid": 0, 00:10:54.912 "state": "enabled", 00:10:54.912 "thread": "nvmf_tgt_poll_group_000", 00:10:54.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:10:54.912 "listen_address": { 00:10:54.912 "trtype": "TCP", 00:10:54.912 "adrfam": "IPv4", 00:10:54.912 "traddr": "10.0.0.3", 00:10:54.912 "trsvcid": "4420" 00:10:54.912 }, 00:10:54.912 "peer_address": { 00:10:54.912 "trtype": "TCP", 00:10:54.912 "adrfam": "IPv4", 00:10:54.912 "traddr": "10.0.0.1", 00:10:54.912 "trsvcid": "36836" 00:10:54.912 }, 00:10:54.912 "auth": { 00:10:54.912 "state": "completed", 00:10:54.912 "digest": "sha256", 00:10:54.912 "dhgroup": "null" 00:10:54.912 } 00:10:54.912 } 00:10:54.912 ]' 00:10:54.912 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:54.912 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.912 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:54.912 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:54.912 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:54.912 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.912 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.913 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.479 13:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:10:55.479 13:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:10:56.045 13:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.045 13:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:10:56.045 13:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.045 13:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.045 13:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.045 13:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.045 13:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:56.045 13:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:56.304 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:56.304 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.304 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:56.304 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:56.304 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:56.304 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.304 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:10:56.304 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.304 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.304 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.304 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:56.304 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:56.304 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:56.870 00:10:56.870 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:56.870 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.870 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.131 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.131 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.131 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.131 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.131 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.131 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.131 { 00:10:57.131 "cntlid": 7, 00:10:57.131 "qid": 0, 00:10:57.131 "state": "enabled", 00:10:57.131 "thread": "nvmf_tgt_poll_group_000", 00:10:57.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:10:57.131 "listen_address": { 00:10:57.131 "trtype": "TCP", 00:10:57.131 "adrfam": "IPv4", 00:10:57.131 "traddr": "10.0.0.3", 00:10:57.131 "trsvcid": "4420" 00:10:57.131 }, 00:10:57.131 "peer_address": { 00:10:57.131 "trtype": "TCP", 00:10:57.131 "adrfam": "IPv4", 00:10:57.131 "traddr": "10.0.0.1", 00:10:57.131 "trsvcid": "36872" 00:10:57.131 }, 00:10:57.131 "auth": { 00:10:57.131 "state": "completed", 00:10:57.131 "digest": "sha256", 00:10:57.131 "dhgroup": "null" 00:10:57.131 } 00:10:57.131 } 00:10:57.131 ]' 00:10:57.131 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:57.131 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:57.131 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:57.131 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:57.131 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.395 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.395 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.395 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.653 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:10:57.653 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:10:58.220 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.220 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:10:58.220 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.220 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.220 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.220 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:58.220 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:58.220 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:58.220 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:58.788 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:58.788 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:58.788 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:58.788 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:58.788 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:58.788 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.788 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:58.788 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.788 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.788 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.788 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:58.788 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:58.788 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.046 00:10:59.046 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.046 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.046 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.305 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.305 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.305 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.305 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.305 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.305 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:59.305 { 00:10:59.305 "cntlid": 9, 00:10:59.305 "qid": 0, 00:10:59.305 "state": "enabled", 00:10:59.305 "thread": "nvmf_tgt_poll_group_000", 00:10:59.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:10:59.305 "listen_address": { 00:10:59.305 "trtype": "TCP", 00:10:59.305 "adrfam": "IPv4", 00:10:59.305 "traddr": "10.0.0.3", 00:10:59.305 "trsvcid": "4420" 00:10:59.305 }, 00:10:59.305 "peer_address": { 00:10:59.305 "trtype": "TCP", 00:10:59.305 "adrfam": "IPv4", 00:10:59.305 "traddr": "10.0.0.1", 00:10:59.305 "trsvcid": "36902" 00:10:59.305 }, 00:10:59.305 "auth": { 00:10:59.305 "state": "completed", 00:10:59.305 "digest": "sha256", 00:10:59.305 "dhgroup": "ffdhe2048" 00:10:59.305 } 00:10:59.305 } 00:10:59.305 ]' 00:10:59.305 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:59.305 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:59.305 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:59.564 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:59.564 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:59.564 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.564 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.564 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.822 
13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:10:59.822 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:11:00.762 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.762 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:00.762 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.762 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.762 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.762 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.762 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:00.762 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:01.021 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:01.021 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.021 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:01.021 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:01.021 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:01.021 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.021 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.021 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.021 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.021 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.021 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.021 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.021 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.279 00:11:01.279 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.279 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:01.279 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.538 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.538 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.538 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.538 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.538 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.538 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:01.538 { 00:11:01.538 "cntlid": 11, 00:11:01.538 "qid": 0, 00:11:01.538 "state": "enabled", 00:11:01.538 "thread": "nvmf_tgt_poll_group_000", 00:11:01.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:01.538 "listen_address": { 00:11:01.538 "trtype": "TCP", 00:11:01.538 "adrfam": "IPv4", 00:11:01.538 "traddr": "10.0.0.3", 00:11:01.538 "trsvcid": "4420" 00:11:01.538 }, 00:11:01.538 "peer_address": { 00:11:01.538 "trtype": "TCP", 00:11:01.538 "adrfam": "IPv4", 00:11:01.538 "traddr": "10.0.0.1", 00:11:01.538 "trsvcid": "44502" 00:11:01.538 }, 00:11:01.538 "auth": { 00:11:01.538 "state": "completed", 00:11:01.538 "digest": "sha256", 00:11:01.538 "dhgroup": "ffdhe2048" 00:11:01.538 } 00:11:01.538 } 00:11:01.538 ]' 00:11:01.538 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:01.797 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.797 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.797 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:01.797 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:01.797 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.797 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.797 
13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.056 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:02.056 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:02.992 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.560 00:11:03.560 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.560 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.560 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.819 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.819 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.819 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.819 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.819 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.819 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.819 { 00:11:03.819 "cntlid": 13, 00:11:03.819 "qid": 0, 00:11:03.819 "state": "enabled", 00:11:03.819 "thread": "nvmf_tgt_poll_group_000", 00:11:03.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:03.819 "listen_address": { 00:11:03.819 "trtype": "TCP", 00:11:03.819 "adrfam": "IPv4", 00:11:03.819 "traddr": "10.0.0.3", 00:11:03.819 "trsvcid": "4420" 00:11:03.819 }, 00:11:03.819 "peer_address": { 00:11:03.819 "trtype": "TCP", 00:11:03.819 "adrfam": "IPv4", 00:11:03.819 "traddr": "10.0.0.1", 00:11:03.819 "trsvcid": "44510" 00:11:03.819 }, 00:11:03.819 "auth": { 00:11:03.819 "state": "completed", 00:11:03.819 "digest": "sha256", 00:11:03.819 "dhgroup": "ffdhe2048" 00:11:03.819 } 00:11:03.819 } 00:11:03.819 ]' 00:11:03.819 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.819 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.819 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.819 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:03.819 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.819 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.819 13:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.819 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.386 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:04.386 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:04.954 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.954 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:04.954 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.954 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.954 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.954 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:04.954 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:04.954 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:05.213 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:05.213 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.213 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:05.213 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:05.213 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:05.213 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.213 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:11:05.213 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.213 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:05.213 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.213 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:05.213 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:05.213 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:05.780 00:11:05.780 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.780 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.780 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.039 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.039 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.039 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.039 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.039 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.039 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.039 { 00:11:06.039 "cntlid": 15, 00:11:06.039 "qid": 0, 00:11:06.039 "state": "enabled", 00:11:06.039 "thread": "nvmf_tgt_poll_group_000", 00:11:06.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:06.039 "listen_address": { 00:11:06.039 "trtype": "TCP", 00:11:06.039 "adrfam": "IPv4", 00:11:06.039 "traddr": "10.0.0.3", 00:11:06.039 "trsvcid": "4420" 00:11:06.039 }, 00:11:06.039 "peer_address": { 00:11:06.039 "trtype": "TCP", 00:11:06.039 "adrfam": "IPv4", 00:11:06.039 "traddr": "10.0.0.1", 00:11:06.039 "trsvcid": "44542" 00:11:06.039 }, 00:11:06.039 "auth": { 00:11:06.039 "state": "completed", 00:11:06.039 "digest": "sha256", 00:11:06.039 "dhgroup": "ffdhe2048" 00:11:06.039 } 00:11:06.039 } 00:11:06.039 ]' 00:11:06.039 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.039 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:06.039 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.039 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:06.039 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.039 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.039 
13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.039 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.298 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:11:06.298 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:11:06.865 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.865 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:06.865 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.865 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.123 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.123 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:07.123 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:07.123 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:07.123 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:07.383 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:07.383 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.383 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:07.383 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:07.383 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:07.383 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.383 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.383 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.383 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:07.383 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.383 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.383 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.383 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.641 00:11:07.641 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.641 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.641 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.900 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.900 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.900 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.900 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.900 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.900 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.900 { 00:11:07.900 "cntlid": 17, 00:11:07.900 "qid": 0, 00:11:07.900 "state": "enabled", 00:11:07.900 "thread": "nvmf_tgt_poll_group_000", 00:11:07.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:07.900 "listen_address": { 00:11:07.900 "trtype": "TCP", 00:11:07.900 "adrfam": "IPv4", 00:11:07.900 "traddr": "10.0.0.3", 00:11:07.900 "trsvcid": "4420" 00:11:07.900 }, 00:11:07.900 "peer_address": { 00:11:07.900 "trtype": "TCP", 00:11:07.900 "adrfam": "IPv4", 00:11:07.900 "traddr": "10.0.0.1", 00:11:07.900 "trsvcid": "44570" 00:11:07.900 }, 00:11:07.900 "auth": { 00:11:07.900 "state": "completed", 00:11:07.900 "digest": "sha256", 00:11:07.900 "dhgroup": "ffdhe3072" 00:11:07.900 } 00:11:07.900 } 00:11:07.900 ]' 00:11:07.900 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.900 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.900 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.900 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:08.160 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.160 13:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.160 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.160 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.418 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:11:08.418 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:11:08.986 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.986 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:08.986 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.986 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.986 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.986 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.986 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:08.986 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:09.553 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:09.553 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:09.553 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:09.553 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:09.553 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:09.553 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.553 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:09.553 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.553 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.553 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.553 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.553 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.553 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.812 00:11:09.812 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:09.812 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.812 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.070 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.070 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.070 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.070 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.329 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.329 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.329 { 00:11:10.329 "cntlid": 19, 00:11:10.329 "qid": 0, 00:11:10.329 "state": "enabled", 00:11:10.329 "thread": "nvmf_tgt_poll_group_000", 00:11:10.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:10.329 "listen_address": { 00:11:10.329 "trtype": "TCP", 00:11:10.329 "adrfam": "IPv4", 00:11:10.329 "traddr": "10.0.0.3", 00:11:10.329 "trsvcid": "4420" 00:11:10.329 }, 00:11:10.329 "peer_address": { 00:11:10.329 "trtype": "TCP", 00:11:10.329 "adrfam": "IPv4", 00:11:10.329 "traddr": "10.0.0.1", 00:11:10.329 "trsvcid": "44604" 00:11:10.329 }, 00:11:10.329 "auth": { 00:11:10.329 "state": "completed", 00:11:10.329 "digest": "sha256", 00:11:10.329 "dhgroup": "ffdhe3072" 00:11:10.329 } 00:11:10.329 } 00:11:10.329 ]' 00:11:10.329 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.329 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:10.329 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.329 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:10.329 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.329 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.329 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.329 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.587 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:10.588 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:11.522 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.523 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:11.523 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.523 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.523 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.523 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.523 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:11.523 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:11.781 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:11.781 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.781 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:11.781 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:11.781 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:11.781 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.781 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.781 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.781 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.781 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.781 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.781 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.781 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.039 00:11:12.039 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.040 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.040 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.298 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.298 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.298 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.298 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.298 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.298 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.298 { 00:11:12.298 "cntlid": 21, 00:11:12.298 "qid": 0, 00:11:12.298 "state": "enabled", 00:11:12.298 "thread": "nvmf_tgt_poll_group_000", 00:11:12.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:12.298 "listen_address": { 00:11:12.298 "trtype": "TCP", 00:11:12.298 "adrfam": "IPv4", 00:11:12.298 "traddr": "10.0.0.3", 00:11:12.298 "trsvcid": "4420" 00:11:12.298 }, 00:11:12.298 "peer_address": { 00:11:12.298 "trtype": "TCP", 00:11:12.298 "adrfam": "IPv4", 00:11:12.298 "traddr": "10.0.0.1", 00:11:12.298 "trsvcid": "60066" 00:11:12.298 }, 00:11:12.298 "auth": { 00:11:12.298 "state": "completed", 00:11:12.298 "digest": "sha256", 00:11:12.298 "dhgroup": "ffdhe3072" 00:11:12.298 } 00:11:12.298 } 00:11:12.298 ]' 00:11:12.298 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.298 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:12.298 13:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.557 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:12.557 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.557 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.557 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.557 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.815 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:12.815 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:13.382 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.641 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:13.641 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.641 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.641 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.641 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.641 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:13.641 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:13.900 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:13.900 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.900 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:13.900 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:13.900 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:13.900 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.900 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:11:13.900 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.900 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.900 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.900 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:13.900 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:13.900 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:14.163 00:11:14.163 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.163 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.163 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.432 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.432 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.432 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.432 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.432 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.432 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.432 { 00:11:14.432 "cntlid": 23, 00:11:14.432 "qid": 0, 00:11:14.432 "state": "enabled", 00:11:14.432 "thread": "nvmf_tgt_poll_group_000", 00:11:14.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:14.432 "listen_address": { 00:11:14.432 "trtype": "TCP", 00:11:14.432 "adrfam": "IPv4", 00:11:14.432 "traddr": "10.0.0.3", 00:11:14.432 "trsvcid": "4420" 00:11:14.432 }, 00:11:14.432 "peer_address": { 00:11:14.432 "trtype": "TCP", 00:11:14.432 "adrfam": "IPv4", 00:11:14.432 "traddr": "10.0.0.1", 00:11:14.432 "trsvcid": "60088" 00:11:14.432 }, 00:11:14.432 "auth": { 00:11:14.432 "state": "completed", 00:11:14.432 "digest": "sha256", 00:11:14.432 "dhgroup": "ffdhe3072" 00:11:14.432 } 00:11:14.432 } 00:11:14.432 ]' 00:11:14.432 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.691 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:14.691 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.691 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:14.691 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.691 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.691 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.691 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.951 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:11:14.951 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:11:15.887 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.887 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:15.887 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.887 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.887 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.887 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:15.887 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.887 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:15.887 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:16.146 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:16.146 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.146 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:16.146 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:16.146 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:16.146 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.146 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.146 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.146 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.146 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.146 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.146 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.146 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.405 00:11:16.405 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.405 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.405 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.972 { 00:11:16.972 "cntlid": 25, 00:11:16.972 "qid": 0, 00:11:16.972 "state": "enabled", 00:11:16.972 "thread": "nvmf_tgt_poll_group_000", 00:11:16.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:16.972 "listen_address": { 00:11:16.972 "trtype": "TCP", 00:11:16.972 "adrfam": "IPv4", 00:11:16.972 "traddr": "10.0.0.3", 00:11:16.972 "trsvcid": "4420" 00:11:16.972 }, 00:11:16.972 "peer_address": { 00:11:16.972 "trtype": "TCP", 00:11:16.972 "adrfam": "IPv4", 00:11:16.972 "traddr": "10.0.0.1", 00:11:16.972 "trsvcid": "60114" 00:11:16.972 }, 00:11:16.972 "auth": { 00:11:16.972 "state": "completed", 00:11:16.972 "digest": "sha256", 00:11:16.972 "dhgroup": "ffdhe4096" 00:11:16.972 } 00:11:16.972 } 00:11:16.972 ]' 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.972 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.231 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:11:17.231 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:11:18.166 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.166 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:18.166 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.166 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.166 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.166 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.166 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:18.166 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:18.166 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:18.166 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.166 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:18.166 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:18.166 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:18.166 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.166 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.166 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.166 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.166 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.166 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.166 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.166 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.732 00:11:18.732 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.732 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.732 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.992 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.992 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.992 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.992 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.992 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.992 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.992 { 00:11:18.992 "cntlid": 27, 00:11:18.992 "qid": 0, 00:11:18.992 "state": "enabled", 00:11:18.992 "thread": "nvmf_tgt_poll_group_000", 00:11:18.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:18.992 "listen_address": { 00:11:18.992 "trtype": "TCP", 00:11:18.992 "adrfam": "IPv4", 00:11:18.992 "traddr": "10.0.0.3", 00:11:18.993 "trsvcid": "4420" 00:11:18.993 }, 00:11:18.993 "peer_address": { 00:11:18.993 "trtype": "TCP", 00:11:18.993 "adrfam": "IPv4", 00:11:18.993 "traddr": "10.0.0.1", 00:11:18.993 "trsvcid": "60152" 00:11:18.993 }, 00:11:18.993 "auth": { 00:11:18.993 "state": "completed", 
00:11:18.993 "digest": "sha256", 00:11:18.993 "dhgroup": "ffdhe4096" 00:11:18.993 } 00:11:18.993 } 00:11:18.993 ]' 00:11:18.993 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.993 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:18.993 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.251 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:19.251 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.251 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.251 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.251 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.508 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:19.508 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:20.075 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.075 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:20.075 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.075 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.075 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.075 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.075 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:20.075 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:20.334 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:20.334 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.334 13:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:20.334 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:20.334 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:20.334 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.334 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.334 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.334 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.595 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.595 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.595 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.595 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.854 00:11:20.854 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.854 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.854 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.114 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.114 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.114 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.114 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.114 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.114 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.114 { 00:11:21.114 "cntlid": 29, 00:11:21.114 "qid": 0, 00:11:21.114 "state": "enabled", 00:11:21.114 "thread": "nvmf_tgt_poll_group_000", 00:11:21.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:21.114 "listen_address": { 00:11:21.114 "trtype": "TCP", 00:11:21.114 "adrfam": "IPv4", 00:11:21.114 "traddr": "10.0.0.3", 00:11:21.114 "trsvcid": "4420" 00:11:21.114 }, 00:11:21.114 "peer_address": { 00:11:21.114 "trtype": "TCP", 00:11:21.114 "adrfam": 
"IPv4", 00:11:21.114 "traddr": "10.0.0.1", 00:11:21.114 "trsvcid": "60176" 00:11:21.114 }, 00:11:21.114 "auth": { 00:11:21.114 "state": "completed", 00:11:21.114 "digest": "sha256", 00:11:21.114 "dhgroup": "ffdhe4096" 00:11:21.114 } 00:11:21.114 } 00:11:21.114 ]' 00:11:21.114 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.114 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.114 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.372 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:21.372 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.372 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.372 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.372 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.630 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:21.630 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:22.564 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.564 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:22.564 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.564 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.564 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.564 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.564 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:22.564 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:22.822 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:22.822 13:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.822 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:22.822 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:22.822 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:22.822 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.822 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:11:22.822 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.823 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.823 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.823 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:22.823 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:22.823 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:23.081 00:11:23.081 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.081 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.081 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.340 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.341 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.341 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.341 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.600 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.600 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.600 { 00:11:23.600 "cntlid": 31, 00:11:23.600 "qid": 0, 00:11:23.600 "state": "enabled", 00:11:23.600 "thread": "nvmf_tgt_poll_group_000", 00:11:23.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:23.600 "listen_address": { 00:11:23.600 "trtype": "TCP", 00:11:23.600 "adrfam": "IPv4", 00:11:23.600 "traddr": "10.0.0.3", 00:11:23.600 "trsvcid": "4420" 00:11:23.600 }, 00:11:23.600 "peer_address": { 00:11:23.600 "trtype": "TCP", 
00:11:23.600 "adrfam": "IPv4", 00:11:23.600 "traddr": "10.0.0.1", 00:11:23.600 "trsvcid": "51776" 00:11:23.600 }, 00:11:23.600 "auth": { 00:11:23.600 "state": "completed", 00:11:23.600 "digest": "sha256", 00:11:23.600 "dhgroup": "ffdhe4096" 00:11:23.600 } 00:11:23.600 } 00:11:23.600 ]' 00:11:23.600 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.600 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.600 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.600 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:23.600 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.600 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.600 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.600 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.858 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:11:23.858 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:11:24.794 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.794 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:24.794 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.794 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.794 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.794 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:24.794 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.794 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:24.794 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:25.053 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:25.053 
13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.053 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:25.053 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:25.053 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:25.053 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.053 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.053 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.053 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.053 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.053 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.053 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.053 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.620 00:11:25.620 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.620 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.620 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.879 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.879 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.879 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.879 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.879 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.879 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.879 { 00:11:25.879 "cntlid": 33, 00:11:25.879 "qid": 0, 00:11:25.879 "state": "enabled", 00:11:25.879 "thread": "nvmf_tgt_poll_group_000", 00:11:25.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:25.879 "listen_address": { 00:11:25.879 "trtype": "TCP", 00:11:25.879 "adrfam": "IPv4", 00:11:25.879 "traddr": 
"10.0.0.3", 00:11:25.879 "trsvcid": "4420" 00:11:25.879 }, 00:11:25.879 "peer_address": { 00:11:25.879 "trtype": "TCP", 00:11:25.879 "adrfam": "IPv4", 00:11:25.879 "traddr": "10.0.0.1", 00:11:25.879 "trsvcid": "51798" 00:11:25.879 }, 00:11:25.879 "auth": { 00:11:25.879 "state": "completed", 00:11:25.879 "digest": "sha256", 00:11:25.880 "dhgroup": "ffdhe6144" 00:11:25.880 } 00:11:25.880 } 00:11:25.880 ]' 00:11:25.880 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.880 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.880 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.880 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:25.880 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.880 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.880 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.880 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.448 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:11:26.448 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:11:27.026 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.026 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:27.026 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.026 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.026 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.026 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.026 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:27.026 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:27.284 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:27.284 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.284 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:27.284 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:27.284 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:27.284 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.284 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.284 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.284 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.284 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.284 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.284 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.284 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.850 00:11:27.850 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.850 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.850 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.419 { 00:11:28.419 "cntlid": 35, 00:11:28.419 "qid": 0, 00:11:28.419 "state": "enabled", 00:11:28.419 "thread": "nvmf_tgt_poll_group_000", 
00:11:28.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:28.419 "listen_address": { 00:11:28.419 "trtype": "TCP", 00:11:28.419 "adrfam": "IPv4", 00:11:28.419 "traddr": "10.0.0.3", 00:11:28.419 "trsvcid": "4420" 00:11:28.419 }, 00:11:28.419 "peer_address": { 00:11:28.419 "trtype": "TCP", 00:11:28.419 "adrfam": "IPv4", 00:11:28.419 "traddr": "10.0.0.1", 00:11:28.419 "trsvcid": "51846" 00:11:28.419 }, 00:11:28.419 "auth": { 00:11:28.419 "state": "completed", 00:11:28.419 "digest": "sha256", 00:11:28.419 "dhgroup": "ffdhe6144" 00:11:28.419 } 00:11:28.419 } 00:11:28.419 ]' 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.419 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.677 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:28.677 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:29.612 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.613 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:29.613 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.613 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.613 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.613 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.613 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:29.613 13:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:29.871 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:29.871 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.871 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:29.871 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:29.871 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:29.871 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.871 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.871 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.871 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.871 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.871 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.871 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.871 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.438 00:11:30.438 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.438 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.438 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.697 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.697 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.697 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.697 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.697 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.697 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:30.697 { 
00:11:30.697 "cntlid": 37, 00:11:30.697 "qid": 0, 00:11:30.697 "state": "enabled", 00:11:30.697 "thread": "nvmf_tgt_poll_group_000", 00:11:30.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:30.697 "listen_address": { 00:11:30.697 "trtype": "TCP", 00:11:30.697 "adrfam": "IPv4", 00:11:30.697 "traddr": "10.0.0.3", 00:11:30.697 "trsvcid": "4420" 00:11:30.697 }, 00:11:30.697 "peer_address": { 00:11:30.697 "trtype": "TCP", 00:11:30.697 "adrfam": "IPv4", 00:11:30.697 "traddr": "10.0.0.1", 00:11:30.697 "trsvcid": "51852" 00:11:30.697 }, 00:11:30.697 "auth": { 00:11:30.697 "state": "completed", 00:11:30.697 "digest": "sha256", 00:11:30.697 "dhgroup": "ffdhe6144" 00:11:30.697 } 00:11:30.697 } 00:11:30.697 ]' 00:11:30.697 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:30.697 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:30.697 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:30.697 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:30.697 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:30.956 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.956 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.956 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.214 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:31.214 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:31.780 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.780 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:31.780 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.780 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.780 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.780 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.780 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:31.780 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:32.347 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:32.347 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.347 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:32.347 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:32.347 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:32.347 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.347 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:11:32.347 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.347 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.347 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.347 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:32.347 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:32.347 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:32.915 00:11:32.915 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.915 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.915 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.173 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.173 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.173 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.173 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.173 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.173 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:33.173 { 00:11:33.173 "cntlid": 39, 00:11:33.173 "qid": 0, 00:11:33.173 "state": "enabled", 00:11:33.173 "thread": "nvmf_tgt_poll_group_000", 00:11:33.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:33.173 "listen_address": { 00:11:33.173 "trtype": "TCP", 00:11:33.173 "adrfam": "IPv4", 00:11:33.173 "traddr": "10.0.0.3", 00:11:33.173 "trsvcid": "4420" 00:11:33.173 }, 00:11:33.173 "peer_address": { 00:11:33.173 "trtype": "TCP", 00:11:33.173 "adrfam": "IPv4", 00:11:33.173 "traddr": "10.0.0.1", 00:11:33.173 "trsvcid": "37972" 00:11:33.173 }, 00:11:33.173 "auth": { 00:11:33.173 "state": "completed", 00:11:33.173 "digest": "sha256", 00:11:33.173 "dhgroup": "ffdhe6144" 00:11:33.173 } 00:11:33.173 } 00:11:33.173 ]' 00:11:33.173 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.173 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.173 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.173 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:33.173 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.173 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.173 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.173 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.767 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:11:33.767 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:11:34.335 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.335 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:34.335 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.335 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.335 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.335 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.335 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.335 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:34.335 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:34.594 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:34.594 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.594 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:34.594 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:34.594 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:34.594 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.594 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.594 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.594 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.594 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.594 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.595 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.595 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.531 00:11:35.531 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.531 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.531 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.531 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.791 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.791 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.791 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.791 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:35.791 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.791 { 00:11:35.791 "cntlid": 41, 00:11:35.791 "qid": 0, 00:11:35.791 "state": "enabled", 00:11:35.791 "thread": "nvmf_tgt_poll_group_000", 00:11:35.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:35.791 "listen_address": { 00:11:35.791 "trtype": "TCP", 00:11:35.791 "adrfam": "IPv4", 00:11:35.791 "traddr": "10.0.0.3", 00:11:35.791 "trsvcid": "4420" 00:11:35.791 }, 00:11:35.791 "peer_address": { 00:11:35.791 "trtype": "TCP", 00:11:35.791 "adrfam": "IPv4", 00:11:35.791 "traddr": "10.0.0.1", 00:11:35.791 "trsvcid": "38008" 00:11:35.791 }, 00:11:35.791 "auth": { 00:11:35.791 "state": "completed", 00:11:35.791 "digest": "sha256", 00:11:35.791 "dhgroup": "ffdhe8192" 00:11:35.791 } 00:11:35.791 } 00:11:35.791 ]' 00:11:35.791 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.791 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:35.791 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.791 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:35.791 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.791 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.791 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.791 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.051 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:11:36.051 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:11:36.985 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.985 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:36.985 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.985 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.985 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:36.985 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.985 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:36.985 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:37.245 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:37.245 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.245 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:37.245 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:37.245 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:37.245 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.245 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.245 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.245 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.245 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.245 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.245 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.245 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.812 00:11:37.812 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.812 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.812 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.379 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.379 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.379 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.379 13:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.379 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.379 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.379 { 00:11:38.379 "cntlid": 43, 00:11:38.379 "qid": 0, 00:11:38.379 "state": "enabled", 00:11:38.379 "thread": "nvmf_tgt_poll_group_000", 00:11:38.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:38.379 "listen_address": { 00:11:38.379 "trtype": "TCP", 00:11:38.379 "adrfam": "IPv4", 00:11:38.379 "traddr": "10.0.0.3", 00:11:38.379 "trsvcid": "4420" 00:11:38.379 }, 00:11:38.379 "peer_address": { 00:11:38.379 "trtype": "TCP", 00:11:38.379 "adrfam": "IPv4", 00:11:38.379 "traddr": "10.0.0.1", 00:11:38.379 "trsvcid": "38038" 00:11:38.379 }, 00:11:38.379 "auth": { 00:11:38.379 "state": "completed", 00:11:38.379 "digest": "sha256", 00:11:38.379 "dhgroup": "ffdhe8192" 00:11:38.379 } 00:11:38.379 } 00:11:38.379 ]' 00:11:38.379 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.379 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:38.379 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.379 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:38.379 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.379 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.379 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.379 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.944 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:38.944 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:39.510 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.510 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:39.510 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.510 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
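The per-key cycle visible in this stretch of the log reduces to two target-side RPCs wrapped around the connect attempt; a condensed sketch using the subsystem and host NQNs from this run (key2/ckey2 are the key names the script registered earlier, outside this excerpt):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5
    # Allow this host to authenticate with key2 and require it to verify ckey2 in return.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # ... connect, verify the qpair auth state, disconnect ...
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"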
00:11:39.510 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.510 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.510 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:39.510 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:39.769 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:39.769 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.769 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:39.769 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:39.769 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:39.769 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.769 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.769 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.769 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.769 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.769 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.769 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.769 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.707 00:11:40.707 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.707 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.707 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.967 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.967 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.967 13:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.967 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.967 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.967 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.967 { 00:11:40.967 "cntlid": 45, 00:11:40.967 "qid": 0, 00:11:40.967 "state": "enabled", 00:11:40.967 "thread": "nvmf_tgt_poll_group_000", 00:11:40.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:40.967 "listen_address": { 00:11:40.967 "trtype": "TCP", 00:11:40.967 "adrfam": "IPv4", 00:11:40.967 "traddr": "10.0.0.3", 00:11:40.967 "trsvcid": "4420" 00:11:40.967 }, 00:11:40.967 "peer_address": { 00:11:40.967 "trtype": "TCP", 00:11:40.967 "adrfam": "IPv4", 00:11:40.967 "traddr": "10.0.0.1", 00:11:40.967 "trsvcid": "38076" 00:11:40.967 }, 00:11:40.967 "auth": { 00:11:40.967 "state": "completed", 00:11:40.967 "digest": "sha256", 00:11:40.967 "dhgroup": "ffdhe8192" 00:11:40.967 } 00:11:40.967 } 00:11:40.967 ]' 00:11:40.967 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.967 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:40.967 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.967 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:40.967 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.226 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.226 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.226 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.485 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:41.485 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:42.053 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.053 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:42.053 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
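The nvme_connect helper seen here wraps an ordinary nvme-cli call; a sketch with the same flags that appear in the trace, where $HOST_SECRET and $CTRL_SECRET stand for the literal DHHC-1:xx:...: strings printed above:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5
    HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5
    # Kernel-initiator connect with bidirectional DH-HMAC-CHAP secrets.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0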
00:11:42.053 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.053 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.053 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.053 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:42.053 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:42.621 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:42.621 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.621 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:42.621 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:42.621 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:42.621 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.621 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:11:42.621 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.621 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.621 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.621 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:42.621 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:42.621 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:43.188 00:11:43.188 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.188 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.188 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.447 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.447 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.447 
13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.447 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.448 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.448 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.448 { 00:11:43.448 "cntlid": 47, 00:11:43.448 "qid": 0, 00:11:43.448 "state": "enabled", 00:11:43.448 "thread": "nvmf_tgt_poll_group_000", 00:11:43.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:43.448 "listen_address": { 00:11:43.448 "trtype": "TCP", 00:11:43.448 "adrfam": "IPv4", 00:11:43.448 "traddr": "10.0.0.3", 00:11:43.448 "trsvcid": "4420" 00:11:43.448 }, 00:11:43.448 "peer_address": { 00:11:43.448 "trtype": "TCP", 00:11:43.448 "adrfam": "IPv4", 00:11:43.448 "traddr": "10.0.0.1", 00:11:43.448 "trsvcid": "47650" 00:11:43.448 }, 00:11:43.448 "auth": { 00:11:43.448 "state": "completed", 00:11:43.448 "digest": "sha256", 00:11:43.448 "dhgroup": "ffdhe8192" 00:11:43.448 } 00:11:43.448 } 00:11:43.448 ]' 00:11:43.448 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.448 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.448 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.706 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:43.706 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.706 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.706 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.706 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.965 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:11:43.965 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:11:44.532 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.532 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:44.532 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.532 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
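On the SPDK host application the same keys are exercised through the bdev layer rather than the kernel initiator; a sketch of the attach call as traced, noting that key3 carries no controller key in this run, so --dhchap-ctrlr-key is simply omitted and the authentication is unidirectional:

    HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5
    # Attach a controller as bdev nvme0, authenticating with key3 only.
    $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3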
00:11:44.532 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.532 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:44.532 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:44.532 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.532 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:44.532 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:44.791 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:44.791 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.791 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:44.791 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:44.791 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:44.791 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.791 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.791 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.791 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.791 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.791 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.791 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.791 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.049 00:11:45.308 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.308 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.308 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.566 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.566 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.566 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.566 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.566 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.566 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.566 { 00:11:45.566 "cntlid": 49, 00:11:45.566 "qid": 0, 00:11:45.566 "state": "enabled", 00:11:45.566 "thread": "nvmf_tgt_poll_group_000", 00:11:45.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:45.566 "listen_address": { 00:11:45.566 "trtype": "TCP", 00:11:45.566 "adrfam": "IPv4", 00:11:45.566 "traddr": "10.0.0.3", 00:11:45.566 "trsvcid": "4420" 00:11:45.566 }, 00:11:45.566 "peer_address": { 00:11:45.566 "trtype": "TCP", 00:11:45.566 "adrfam": "IPv4", 00:11:45.566 "traddr": "10.0.0.1", 00:11:45.566 "trsvcid": "47682" 00:11:45.566 }, 00:11:45.566 "auth": { 00:11:45.566 "state": "completed", 00:11:45.566 "digest": "sha384", 00:11:45.566 "dhgroup": "null" 00:11:45.566 } 00:11:45.567 } 00:11:45.567 ]' 00:11:45.567 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.567 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.567 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.567 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:45.567 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.567 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.567 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.567 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.867 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:11:45.867 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.862 13:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.862 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.121 00:11:47.389 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.389 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
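Before each authentication attempt the host is restricted to a single digest/dhgroup pair, which is what the bdev_nvme_set_options calls in this block do; a sketch for the sha384/null combination being tested here:

    HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    # Offer only SHA-384 as the DH-HMAC-CHAP digest and disable DH key exchange ("null" group).
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null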
00:11:47.389 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.648 { 00:11:47.648 "cntlid": 51, 00:11:47.648 "qid": 0, 00:11:47.648 "state": "enabled", 00:11:47.648 "thread": "nvmf_tgt_poll_group_000", 00:11:47.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:47.648 "listen_address": { 00:11:47.648 "trtype": "TCP", 00:11:47.648 "adrfam": "IPv4", 00:11:47.648 "traddr": "10.0.0.3", 00:11:47.648 "trsvcid": "4420" 00:11:47.648 }, 00:11:47.648 "peer_address": { 00:11:47.648 "trtype": "TCP", 00:11:47.648 "adrfam": "IPv4", 00:11:47.648 "traddr": "10.0.0.1", 00:11:47.648 "trsvcid": "47714" 00:11:47.648 }, 00:11:47.648 "auth": { 00:11:47.648 "state": "completed", 00:11:47.648 "digest": "sha384", 00:11:47.648 "dhgroup": "null" 00:11:47.648 } 00:11:47.648 } 00:11:47.648 ]' 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.648 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.212 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:48.212 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:48.779 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.779 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.779 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:48.779 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.779 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.779 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.779 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.779 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:48.779 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:49.038 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:49.038 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.038 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:49.038 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:49.038 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:49.038 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.038 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.038 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.038 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.038 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.038 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.038 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.038 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.298 00:11:49.298 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.298 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:11:49.298 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.557 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.557 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.557 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.557 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.557 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.557 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.557 { 00:11:49.557 "cntlid": 53, 00:11:49.557 "qid": 0, 00:11:49.557 "state": "enabled", 00:11:49.557 "thread": "nvmf_tgt_poll_group_000", 00:11:49.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:49.557 "listen_address": { 00:11:49.557 "trtype": "TCP", 00:11:49.557 "adrfam": "IPv4", 00:11:49.557 "traddr": "10.0.0.3", 00:11:49.557 "trsvcid": "4420" 00:11:49.557 }, 00:11:49.557 "peer_address": { 00:11:49.557 "trtype": "TCP", 00:11:49.557 "adrfam": "IPv4", 00:11:49.557 "traddr": "10.0.0.1", 00:11:49.557 "trsvcid": "47750" 00:11:49.557 }, 00:11:49.557 "auth": { 00:11:49.557 "state": "completed", 00:11:49.557 "digest": "sha384", 00:11:49.557 "dhgroup": "null" 00:11:49.557 } 00:11:49.557 } 00:11:49.557 ]' 00:11:49.557 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.816 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.816 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.816 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:49.816 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.816 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.816 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.816 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.074 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:50.074 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:51.008 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:51.575 00:11:51.575 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.575 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
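After each attach the script confirms the controller actually came up and then tears it down again; the equivalent manual check, using the host RPC socket from this run, looks like:

    HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    $HOSTRPC bdev_nvme_detach_controller nvme0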
00:11:51.575 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.833 { 00:11:51.833 "cntlid": 55, 00:11:51.833 "qid": 0, 00:11:51.833 "state": "enabled", 00:11:51.833 "thread": "nvmf_tgt_poll_group_000", 00:11:51.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:51.833 "listen_address": { 00:11:51.833 "trtype": "TCP", 00:11:51.833 "adrfam": "IPv4", 00:11:51.833 "traddr": "10.0.0.3", 00:11:51.833 "trsvcid": "4420" 00:11:51.833 }, 00:11:51.833 "peer_address": { 00:11:51.833 "trtype": "TCP", 00:11:51.833 "adrfam": "IPv4", 00:11:51.833 "traddr": "10.0.0.1", 00:11:51.833 "trsvcid": "46778" 00:11:51.833 }, 00:11:51.833 "auth": { 00:11:51.833 "state": "completed", 00:11:51.833 "digest": "sha384", 00:11:51.833 "dhgroup": "null" 00:11:51.833 } 00:11:51.833 } 00:11:51.833 ]' 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.833 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.399 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:11:52.400 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:11:52.966 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
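Taken together, the sha256 and sha384 blocks above follow the nested iteration that the for loops in the trace hint at; a simplified reconstruction of that control flow, where connect_authenticate is shorthand for the add_host/attach/verify/teardown sequence shown in the log rather than a literal copy of target/auth.sh:

    # Outline inferred from this excerpt; the full array contents are not visible here,
    # only sha256/sha384 and the null, ffdhe2048 and ffdhe8192 groups are exercised above.
    digests=(sha256 sha384)
    dhgroups=(null ffdhe2048 ffdhe8192)
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          # Pick the matching digest/dhgroup on the host, then run one authenticated connect cycle.
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done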
00:11:52.966 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:52.966 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.966 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.966 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.966 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:52.966 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.966 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:52.966 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:53.225 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:53.225 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.225 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:53.225 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:53.225 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:53.225 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.225 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.225 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.225 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.225 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.225 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.225 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.225 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.483 00:11:53.483 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.483 
13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.483 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.741 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.742 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.742 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.742 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.742 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.742 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.742 { 00:11:53.742 "cntlid": 57, 00:11:53.742 "qid": 0, 00:11:53.742 "state": "enabled", 00:11:53.742 "thread": "nvmf_tgt_poll_group_000", 00:11:53.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:53.742 "listen_address": { 00:11:53.742 "trtype": "TCP", 00:11:53.742 "adrfam": "IPv4", 00:11:53.742 "traddr": "10.0.0.3", 00:11:53.742 "trsvcid": "4420" 00:11:53.742 }, 00:11:53.742 "peer_address": { 00:11:53.742 "trtype": "TCP", 00:11:53.742 "adrfam": "IPv4", 00:11:53.742 "traddr": "10.0.0.1", 00:11:53.742 "trsvcid": "46802" 00:11:53.742 }, 00:11:53.742 "auth": { 00:11:53.742 "state": "completed", 00:11:53.742 "digest": "sha384", 00:11:53.742 "dhgroup": "ffdhe2048" 00:11:53.742 } 00:11:53.742 } 00:11:53.742 ]' 00:11:53.742 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.742 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.742 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.742 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:53.742 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.000 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.000 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.000 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.258 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:11:54.259 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: 
--dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:11:54.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:54.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:54.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:55.391 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:55.391 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.391 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:55.391 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:55.391 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:55.391 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.391 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.391 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.391 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.391 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.391 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.391 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.391 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.650 00:11:55.650 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.650 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.650 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.909 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.909 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.909 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.909 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.909 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.909 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.909 { 00:11:55.909 "cntlid": 59, 00:11:55.909 "qid": 0, 00:11:55.909 "state": "enabled", 00:11:55.909 "thread": "nvmf_tgt_poll_group_000", 00:11:55.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:55.909 "listen_address": { 00:11:55.909 "trtype": "TCP", 00:11:55.909 "adrfam": "IPv4", 00:11:55.909 "traddr": "10.0.0.3", 00:11:55.909 "trsvcid": "4420" 00:11:55.909 }, 00:11:55.909 "peer_address": { 00:11:55.909 "trtype": "TCP", 00:11:55.909 "adrfam": "IPv4", 00:11:55.909 "traddr": "10.0.0.1", 00:11:55.909 "trsvcid": "46832" 00:11:55.909 }, 00:11:55.909 "auth": { 00:11:55.909 "state": "completed", 00:11:55.909 "digest": "sha384", 00:11:55.909 "dhgroup": "ffdhe2048" 00:11:55.909 } 00:11:55.909 } 00:11:55.909 ]' 00:11:55.909 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.909 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.909 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.909 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:55.909 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.168 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.169 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.169 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.428 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:56.428 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:11:56.994 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.994 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:56.994 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.995 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.995 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.995 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.995 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:56.995 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:57.252 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:57.252 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.252 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:57.252 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:57.252 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:57.252 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.252 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.252 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.252 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.252 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.252 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.253 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.253 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.820 00:11:57.820 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.820 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.820 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.079 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.079 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.079 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.079 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.079 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.079 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.079 { 00:11:58.079 "cntlid": 61, 00:11:58.079 "qid": 0, 00:11:58.079 "state": "enabled", 00:11:58.079 "thread": "nvmf_tgt_poll_group_000", 00:11:58.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:11:58.079 "listen_address": { 00:11:58.079 "trtype": "TCP", 00:11:58.079 "adrfam": "IPv4", 00:11:58.079 "traddr": "10.0.0.3", 00:11:58.079 "trsvcid": "4420" 00:11:58.079 }, 00:11:58.079 "peer_address": { 00:11:58.079 "trtype": "TCP", 00:11:58.079 "adrfam": "IPv4", 00:11:58.079 "traddr": "10.0.0.1", 00:11:58.079 "trsvcid": "46862" 00:11:58.079 }, 00:11:58.079 "auth": { 00:11:58.079 "state": "completed", 00:11:58.079 "digest": "sha384", 00:11:58.079 "dhgroup": "ffdhe2048" 00:11:58.079 } 00:11:58.079 } 00:11:58.079 ]' 00:11:58.079 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.079 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:58.079 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.079 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:58.079 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.079 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.079 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.079 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.647 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:58.647 13:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:11:59.216 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.216 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:11:59.216 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.216 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.216 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.216 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.216 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:59.216 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:59.784 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:59.784 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.784 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:59.784 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:59.784 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:59.784 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.784 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:11:59.784 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.784 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.784 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.784 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:59.784 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.784 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:00.043 00:12:00.043 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.043 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.043 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.301 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.301 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.301 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.301 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.301 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.301 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.301 { 00:12:00.302 "cntlid": 63, 00:12:00.302 "qid": 0, 00:12:00.302 "state": "enabled", 00:12:00.302 "thread": "nvmf_tgt_poll_group_000", 00:12:00.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:00.302 "listen_address": { 00:12:00.302 "trtype": "TCP", 00:12:00.302 "adrfam": "IPv4", 00:12:00.302 "traddr": "10.0.0.3", 00:12:00.302 "trsvcid": "4420" 00:12:00.302 }, 00:12:00.302 "peer_address": { 00:12:00.302 "trtype": "TCP", 00:12:00.302 "adrfam": "IPv4", 00:12:00.302 "traddr": "10.0.0.1", 00:12:00.302 "trsvcid": "46888" 00:12:00.302 }, 00:12:00.302 "auth": { 00:12:00.302 "state": "completed", 00:12:00.302 "digest": "sha384", 00:12:00.302 "dhgroup": "ffdhe2048" 00:12:00.302 } 00:12:00.302 } 00:12:00.302 ]' 00:12:00.302 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.302 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:00.302 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.560 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:00.560 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.560 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.560 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.560 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.818 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:00.818 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:01.753 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.753 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:01.753 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.753 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.753 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.753 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:01.753 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.753 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:01.753 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:02.012 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:02.012 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.012 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:02.012 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:02.012 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:02.012 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.012 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.012 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.012 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.012 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.012 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.012 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:02.012 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.270 00:12:02.270 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.270 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.270 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.837 { 00:12:02.837 "cntlid": 65, 00:12:02.837 "qid": 0, 00:12:02.837 "state": "enabled", 00:12:02.837 "thread": "nvmf_tgt_poll_group_000", 00:12:02.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:02.837 "listen_address": { 00:12:02.837 "trtype": "TCP", 00:12:02.837 "adrfam": "IPv4", 00:12:02.837 "traddr": "10.0.0.3", 00:12:02.837 "trsvcid": "4420" 00:12:02.837 }, 00:12:02.837 "peer_address": { 00:12:02.837 "trtype": "TCP", 00:12:02.837 "adrfam": "IPv4", 00:12:02.837 "traddr": "10.0.0.1", 00:12:02.837 "trsvcid": "60694" 00:12:02.837 }, 00:12:02.837 "auth": { 00:12:02.837 "state": "completed", 00:12:02.837 "digest": "sha384", 00:12:02.837 "dhgroup": "ffdhe3072" 00:12:02.837 } 00:12:02.837 } 00:12:02.837 ]' 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.837 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.095 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:12:03.095 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:12:04.031 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.031 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:04.031 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.031 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.031 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.031 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.031 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:04.031 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:04.291 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:04.291 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.291 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:04.291 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:04.291 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:04.291 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.291 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.291 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.291 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.291 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.291 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.291 13:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.291 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.859 00:12:04.859 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.859 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.859 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.118 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.118 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.118 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.118 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.118 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.118 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.118 { 00:12:05.118 "cntlid": 67, 00:12:05.118 "qid": 0, 00:12:05.118 "state": "enabled", 00:12:05.118 "thread": "nvmf_tgt_poll_group_000", 00:12:05.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:05.118 "listen_address": { 00:12:05.118 "trtype": "TCP", 00:12:05.118 "adrfam": "IPv4", 00:12:05.118 "traddr": "10.0.0.3", 00:12:05.118 "trsvcid": "4420" 00:12:05.118 }, 00:12:05.118 "peer_address": { 00:12:05.118 "trtype": "TCP", 00:12:05.118 "adrfam": "IPv4", 00:12:05.118 "traddr": "10.0.0.1", 00:12:05.118 "trsvcid": "60728" 00:12:05.118 }, 00:12:05.118 "auth": { 00:12:05.118 "state": "completed", 00:12:05.118 "digest": "sha384", 00:12:05.118 "dhgroup": "ffdhe3072" 00:12:05.118 } 00:12:05.118 } 00:12:05.118 ]' 00:12:05.118 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.118 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:05.118 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.118 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:05.118 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.118 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.118 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.118 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.377 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:12:05.377 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:12:06.311 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.311 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:06.311 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.311 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.311 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.311 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.311 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:06.311 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:06.569 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:06.569 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.569 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:06.569 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:06.569 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:06.569 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.569 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.570 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.570 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.570 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.570 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.570 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.570 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.136 00:12:07.136 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.136 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.136 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.406 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.406 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.406 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.406 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.406 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.406 { 00:12:07.406 "cntlid": 69, 00:12:07.406 "qid": 0, 00:12:07.406 "state": "enabled", 00:12:07.406 "thread": "nvmf_tgt_poll_group_000", 00:12:07.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:07.406 "listen_address": { 00:12:07.406 "trtype": "TCP", 00:12:07.406 "adrfam": "IPv4", 00:12:07.406 "traddr": "10.0.0.3", 00:12:07.406 "trsvcid": "4420" 00:12:07.406 }, 00:12:07.406 "peer_address": { 00:12:07.406 "trtype": "TCP", 00:12:07.406 "adrfam": "IPv4", 00:12:07.406 "traddr": "10.0.0.1", 00:12:07.406 "trsvcid": "60764" 00:12:07.406 }, 00:12:07.406 "auth": { 00:12:07.406 "state": "completed", 00:12:07.406 "digest": "sha384", 00:12:07.406 "dhgroup": "ffdhe3072" 00:12:07.406 } 00:12:07.406 } 00:12:07.406 ]' 00:12:07.406 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.406 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:07.406 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.406 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:07.406 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.672 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.672 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:07.672 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.930 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:12:07.930 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:12:08.866 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.866 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:08.866 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.866 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.866 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.866 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.866 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:08.866 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:09.125 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:09.125 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.125 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:09.125 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:09.125 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:09.125 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.125 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:12:09.125 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.125 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.125 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.125 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:09.125 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:09.125 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:09.385 00:12:09.385 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.385 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.385 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.715 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.715 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.715 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.715 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.973 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.973 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.973 { 00:12:09.973 "cntlid": 71, 00:12:09.973 "qid": 0, 00:12:09.973 "state": "enabled", 00:12:09.973 "thread": "nvmf_tgt_poll_group_000", 00:12:09.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:09.973 "listen_address": { 00:12:09.973 "trtype": "TCP", 00:12:09.973 "adrfam": "IPv4", 00:12:09.973 "traddr": "10.0.0.3", 00:12:09.973 "trsvcid": "4420" 00:12:09.973 }, 00:12:09.973 "peer_address": { 00:12:09.973 "trtype": "TCP", 00:12:09.973 "adrfam": "IPv4", 00:12:09.973 "traddr": "10.0.0.1", 00:12:09.973 "trsvcid": "60784" 00:12:09.973 }, 00:12:09.973 "auth": { 00:12:09.973 "state": "completed", 00:12:09.973 "digest": "sha384", 00:12:09.973 "dhgroup": "ffdhe3072" 00:12:09.973 } 00:12:09.973 } 00:12:09.973 ]' 00:12:09.973 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.973 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.973 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.973 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:09.973 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.973 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.973 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.973 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.539 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:10.539 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:11.472 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.472 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:11.472 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.472 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.472 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.472 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:11.472 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.472 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:11.472 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:11.730 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:11.730 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.730 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:11.730 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:11.730 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:11.730 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.730 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.730 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.730 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.730 13:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.730 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.730 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.730 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.989 00:12:11.989 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.989 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.989 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.247 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.247 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.247 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.247 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.247 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.247 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.247 { 00:12:12.247 "cntlid": 73, 00:12:12.247 "qid": 0, 00:12:12.247 "state": "enabled", 00:12:12.247 "thread": "nvmf_tgt_poll_group_000", 00:12:12.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:12.247 "listen_address": { 00:12:12.247 "trtype": "TCP", 00:12:12.247 "adrfam": "IPv4", 00:12:12.247 "traddr": "10.0.0.3", 00:12:12.247 "trsvcid": "4420" 00:12:12.247 }, 00:12:12.247 "peer_address": { 00:12:12.247 "trtype": "TCP", 00:12:12.247 "adrfam": "IPv4", 00:12:12.247 "traddr": "10.0.0.1", 00:12:12.247 "trsvcid": "59332" 00:12:12.247 }, 00:12:12.247 "auth": { 00:12:12.247 "state": "completed", 00:12:12.247 "digest": "sha384", 00:12:12.247 "dhgroup": "ffdhe4096" 00:12:12.247 } 00:12:12.247 } 00:12:12.247 ]' 00:12:12.247 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.507 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.507 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.507 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:12.507 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.507 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.507 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.507 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.766 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:12:12.766 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:12:13.334 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.334 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:13.334 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.334 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.592 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.592 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.592 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:13.592 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:13.852 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:13.852 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.852 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:13.852 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:13.852 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:13.852 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.852 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.852 13:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.852 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.852 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.852 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.852 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.852 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.111 00:12:14.111 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.111 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.111 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.678 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.678 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.678 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.678 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.678 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.678 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.678 { 00:12:14.678 "cntlid": 75, 00:12:14.678 "qid": 0, 00:12:14.678 "state": "enabled", 00:12:14.678 "thread": "nvmf_tgt_poll_group_000", 00:12:14.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:14.678 "listen_address": { 00:12:14.678 "trtype": "TCP", 00:12:14.678 "adrfam": "IPv4", 00:12:14.678 "traddr": "10.0.0.3", 00:12:14.678 "trsvcid": "4420" 00:12:14.678 }, 00:12:14.678 "peer_address": { 00:12:14.678 "trtype": "TCP", 00:12:14.678 "adrfam": "IPv4", 00:12:14.678 "traddr": "10.0.0.1", 00:12:14.678 "trsvcid": "59356" 00:12:14.678 }, 00:12:14.678 "auth": { 00:12:14.678 "state": "completed", 00:12:14.678 "digest": "sha384", 00:12:14.678 "dhgroup": "ffdhe4096" 00:12:14.678 } 00:12:14.678 } 00:12:14.678 ]' 00:12:14.678 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.679 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.679 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.679 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:14.679 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.679 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.679 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.679 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.246 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:12:15.247 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:12:15.814 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.814 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:15.814 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.814 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.814 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.814 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.814 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:15.814 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:16.073 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:16.073 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.073 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:16.073 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:16.073 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:16.073 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.073 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.073 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.073 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.073 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.073 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.073 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.073 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.639 00:12:16.639 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.639 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.639 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.898 { 00:12:16.898 "cntlid": 77, 00:12:16.898 "qid": 0, 00:12:16.898 "state": "enabled", 00:12:16.898 "thread": "nvmf_tgt_poll_group_000", 00:12:16.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:16.898 "listen_address": { 00:12:16.898 "trtype": "TCP", 00:12:16.898 "adrfam": "IPv4", 00:12:16.898 "traddr": "10.0.0.3", 00:12:16.898 "trsvcid": "4420" 00:12:16.898 }, 00:12:16.898 "peer_address": { 00:12:16.898 "trtype": "TCP", 00:12:16.898 "adrfam": "IPv4", 00:12:16.898 "traddr": "10.0.0.1", 00:12:16.898 "trsvcid": "59378" 00:12:16.898 }, 00:12:16.898 "auth": { 00:12:16.898 "state": "completed", 00:12:16.898 "digest": "sha384", 00:12:16.898 "dhgroup": "ffdhe4096" 00:12:16.898 } 00:12:16.898 } 00:12:16.898 ]' 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.898 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.467 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:12:17.467 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:12:18.039 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.039 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:18.039 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.039 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.039 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.039 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.039 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:18.039 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:18.298 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:18.298 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.298 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:18.298 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:18.298 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:18.298 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.298 13:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:12:18.298 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.298 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.298 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.298 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:18.298 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:18.298 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:18.557 00:12:18.817 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.817 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.817 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.075 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.075 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.075 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.075 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.075 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.075 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.075 { 00:12:19.075 "cntlid": 79, 00:12:19.075 "qid": 0, 00:12:19.075 "state": "enabled", 00:12:19.075 "thread": "nvmf_tgt_poll_group_000", 00:12:19.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:19.075 "listen_address": { 00:12:19.075 "trtype": "TCP", 00:12:19.075 "adrfam": "IPv4", 00:12:19.075 "traddr": "10.0.0.3", 00:12:19.075 "trsvcid": "4420" 00:12:19.075 }, 00:12:19.075 "peer_address": { 00:12:19.075 "trtype": "TCP", 00:12:19.075 "adrfam": "IPv4", 00:12:19.075 "traddr": "10.0.0.1", 00:12:19.075 "trsvcid": "59402" 00:12:19.075 }, 00:12:19.075 "auth": { 00:12:19.075 "state": "completed", 00:12:19.075 "digest": "sha384", 00:12:19.075 "dhgroup": "ffdhe4096" 00:12:19.075 } 00:12:19.075 } 00:12:19.075 ]' 00:12:19.075 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.075 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.075 13:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.075 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:19.075 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.075 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.075 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.075 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.642 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:19.642 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:20.206 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.206 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:20.206 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.206 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.206 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.206 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:20.206 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.206 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:20.207 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:20.774 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:20.774 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.774 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:20.774 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:20.774 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:20.774 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.774 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.774 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.774 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.774 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.774 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.774 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.774 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.342 00:12:21.342 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.342 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.342 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.602 { 00:12:21.602 "cntlid": 81, 00:12:21.602 "qid": 0, 00:12:21.602 "state": "enabled", 00:12:21.602 "thread": "nvmf_tgt_poll_group_000", 00:12:21.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:21.602 "listen_address": { 00:12:21.602 "trtype": "TCP", 00:12:21.602 "adrfam": "IPv4", 00:12:21.602 "traddr": "10.0.0.3", 00:12:21.602 "trsvcid": "4420" 00:12:21.602 }, 00:12:21.602 "peer_address": { 00:12:21.602 "trtype": "TCP", 00:12:21.602 "adrfam": "IPv4", 00:12:21.602 "traddr": "10.0.0.1", 00:12:21.602 "trsvcid": "37216" 00:12:21.602 }, 00:12:21.602 "auth": { 00:12:21.602 "state": "completed", 00:12:21.602 "digest": "sha384", 00:12:21.602 "dhgroup": "ffdhe6144" 00:12:21.602 } 00:12:21.602 } 00:12:21.602 ]' 00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.602 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.861 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:12:21.861 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.796 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.797 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.797 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.363 00:12:23.363 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.363 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.363 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.621 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.621 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.621 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.621 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.622 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.622 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.622 { 00:12:23.622 "cntlid": 83, 00:12:23.622 "qid": 0, 00:12:23.622 "state": "enabled", 00:12:23.622 "thread": "nvmf_tgt_poll_group_000", 00:12:23.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:23.622 "listen_address": { 00:12:23.622 "trtype": "TCP", 00:12:23.622 "adrfam": "IPv4", 00:12:23.622 "traddr": "10.0.0.3", 00:12:23.622 "trsvcid": "4420" 00:12:23.622 }, 00:12:23.622 "peer_address": { 00:12:23.622 "trtype": "TCP", 00:12:23.622 "adrfam": "IPv4", 00:12:23.622 "traddr": "10.0.0.1", 00:12:23.622 "trsvcid": "37240" 00:12:23.622 }, 00:12:23.622 "auth": { 00:12:23.622 "state": "completed", 00:12:23.622 "digest": "sha384", 
00:12:23.622 "dhgroup": "ffdhe6144" 00:12:23.622 } 00:12:23.622 } 00:12:23.622 ]' 00:12:23.622 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.880 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:23.880 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.880 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:23.880 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.880 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.880 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.880 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.138 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:12:24.138 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:12:25.073 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.073 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:25.073 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.073 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.073 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.073 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.073 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:25.073 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:25.332 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:25.332 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.332 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:25.332 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:25.332 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:25.332 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.332 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.332 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.332 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.332 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.332 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.332 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.332 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.898 00:12:25.898 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.898 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.898 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.157 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.157 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.157 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.157 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.157 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.157 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.157 { 00:12:26.157 "cntlid": 85, 00:12:26.157 "qid": 0, 00:12:26.157 "state": "enabled", 00:12:26.157 "thread": "nvmf_tgt_poll_group_000", 00:12:26.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:26.157 "listen_address": { 00:12:26.157 "trtype": "TCP", 00:12:26.157 "adrfam": "IPv4", 00:12:26.157 "traddr": "10.0.0.3", 00:12:26.157 "trsvcid": "4420" 00:12:26.157 }, 00:12:26.157 "peer_address": { 00:12:26.157 "trtype": "TCP", 00:12:26.157 "adrfam": "IPv4", 00:12:26.157 "traddr": "10.0.0.1", 00:12:26.157 "trsvcid": "37280" 
00:12:26.157 }, 00:12:26.157 "auth": { 00:12:26.157 "state": "completed", 00:12:26.157 "digest": "sha384", 00:12:26.157 "dhgroup": "ffdhe6144" 00:12:26.157 } 00:12:26.157 } 00:12:26.157 ]' 00:12:26.157 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.157 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.157 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.157 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:26.157 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.157 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.157 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.157 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.416 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:12:26.416 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:12:27.383 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.383 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:27.383 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.383 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.383 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.383 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.383 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:27.383 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:27.649 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:27.649 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:27.649 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:27.649 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:27.649 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:27.649 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.649 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:12:27.649 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.649 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.649 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.649 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:27.649 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:27.649 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.216 00:12:28.216 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.216 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.217 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.475 { 00:12:28.475 "cntlid": 87, 00:12:28.475 "qid": 0, 00:12:28.475 "state": "enabled", 00:12:28.475 "thread": "nvmf_tgt_poll_group_000", 00:12:28.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:28.475 "listen_address": { 00:12:28.475 "trtype": "TCP", 00:12:28.475 "adrfam": "IPv4", 00:12:28.475 "traddr": "10.0.0.3", 00:12:28.475 "trsvcid": "4420" 00:12:28.475 }, 00:12:28.475 "peer_address": { 00:12:28.475 "trtype": "TCP", 00:12:28.475 "adrfam": "IPv4", 00:12:28.475 "traddr": "10.0.0.1", 00:12:28.475 "trsvcid": 
"37314" 00:12:28.475 }, 00:12:28.475 "auth": { 00:12:28.475 "state": "completed", 00:12:28.475 "digest": "sha384", 00:12:28.475 "dhgroup": "ffdhe6144" 00:12:28.475 } 00:12:28.475 } 00:12:28.475 ]' 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.475 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.735 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:28.735 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.676 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.613 00:12:30.613 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.613 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.613 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.872 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.872 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.872 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.872 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.872 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.872 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.872 { 00:12:30.872 "cntlid": 89, 00:12:30.872 "qid": 0, 00:12:30.872 "state": "enabled", 00:12:30.872 "thread": "nvmf_tgt_poll_group_000", 00:12:30.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:30.872 "listen_address": { 00:12:30.872 "trtype": "TCP", 00:12:30.872 "adrfam": "IPv4", 00:12:30.872 "traddr": "10.0.0.3", 00:12:30.872 "trsvcid": "4420" 00:12:30.872 }, 00:12:30.872 "peer_address": { 00:12:30.872 
"trtype": "TCP", 00:12:30.872 "adrfam": "IPv4", 00:12:30.872 "traddr": "10.0.0.1", 00:12:30.872 "trsvcid": "37348" 00:12:30.872 }, 00:12:30.872 "auth": { 00:12:30.872 "state": "completed", 00:12:30.872 "digest": "sha384", 00:12:30.872 "dhgroup": "ffdhe8192" 00:12:30.872 } 00:12:30.872 } 00:12:30.872 ]' 00:12:30.872 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.872 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:30.872 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.130 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:31.130 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.130 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.130 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.130 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.388 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:12:31.388 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:12:32.324 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.324 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:32.324 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.324 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.324 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.324 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.324 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:32.324 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:32.581 13:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:32.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:32.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:32.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:32.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.149 00:12:33.149 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.149 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.149 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.718 { 00:12:33.718 "cntlid": 91, 00:12:33.718 "qid": 0, 00:12:33.718 "state": "enabled", 00:12:33.718 "thread": "nvmf_tgt_poll_group_000", 00:12:33.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 
00:12:33.718 "listen_address": { 00:12:33.718 "trtype": "TCP", 00:12:33.718 "adrfam": "IPv4", 00:12:33.718 "traddr": "10.0.0.3", 00:12:33.718 "trsvcid": "4420" 00:12:33.718 }, 00:12:33.718 "peer_address": { 00:12:33.718 "trtype": "TCP", 00:12:33.718 "adrfam": "IPv4", 00:12:33.718 "traddr": "10.0.0.1", 00:12:33.718 "trsvcid": "39836" 00:12:33.718 }, 00:12:33.718 "auth": { 00:12:33.718 "state": "completed", 00:12:33.718 "digest": "sha384", 00:12:33.718 "dhgroup": "ffdhe8192" 00:12:33.718 } 00:12:33.718 } 00:12:33.718 ]' 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.718 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.977 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:12:33.977 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:12:34.914 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.914 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:34.914 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.914 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.914 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.914 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.914 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:34.914 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:35.173 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:35.173 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.173 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:35.173 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:35.173 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:35.173 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.173 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.173 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.173 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.173 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.173 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.173 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.173 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.740 00:12:35.740 13:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.740 13:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.740 13:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.999 13:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.999 13:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.999 13:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.999 13:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.258 13:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.258 13:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.258 { 00:12:36.258 "cntlid": 93, 00:12:36.258 "qid": 0, 00:12:36.258 "state": "enabled", 00:12:36.258 "thread": 
"nvmf_tgt_poll_group_000", 00:12:36.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:36.258 "listen_address": { 00:12:36.258 "trtype": "TCP", 00:12:36.258 "adrfam": "IPv4", 00:12:36.258 "traddr": "10.0.0.3", 00:12:36.258 "trsvcid": "4420" 00:12:36.258 }, 00:12:36.258 "peer_address": { 00:12:36.258 "trtype": "TCP", 00:12:36.258 "adrfam": "IPv4", 00:12:36.258 "traddr": "10.0.0.1", 00:12:36.258 "trsvcid": "39864" 00:12:36.258 }, 00:12:36.258 "auth": { 00:12:36.258 "state": "completed", 00:12:36.258 "digest": "sha384", 00:12:36.258 "dhgroup": "ffdhe8192" 00:12:36.258 } 00:12:36.258 } 00:12:36.258 ]' 00:12:36.258 13:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.258 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.258 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.258 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:36.258 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.258 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.258 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.258 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.517 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:12:36.517 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:12:37.456 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.456 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:37.456 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.456 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.456 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.456 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.456 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:37.456 13:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:37.713 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:37.713 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.713 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:37.713 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:37.713 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:37.713 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.713 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:12:37.713 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.713 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.713 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.713 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:37.713 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:37.713 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.648 00:12:38.648 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.648 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.648 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.906 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.906 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.906 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.906 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.906 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.906 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.906 { 00:12:38.906 "cntlid": 95, 00:12:38.906 "qid": 0, 00:12:38.906 "state": "enabled", 00:12:38.906 
"thread": "nvmf_tgt_poll_group_000", 00:12:38.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:38.906 "listen_address": { 00:12:38.906 "trtype": "TCP", 00:12:38.906 "adrfam": "IPv4", 00:12:38.906 "traddr": "10.0.0.3", 00:12:38.906 "trsvcid": "4420" 00:12:38.906 }, 00:12:38.906 "peer_address": { 00:12:38.906 "trtype": "TCP", 00:12:38.906 "adrfam": "IPv4", 00:12:38.906 "traddr": "10.0.0.1", 00:12:38.906 "trsvcid": "39886" 00:12:38.906 }, 00:12:38.906 "auth": { 00:12:38.906 "state": "completed", 00:12:38.906 "digest": "sha384", 00:12:38.906 "dhgroup": "ffdhe8192" 00:12:38.906 } 00:12:38.906 } 00:12:38.906 ]' 00:12:38.907 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.907 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.907 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.907 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:38.907 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.165 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.165 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.165 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.423 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:39.423 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:40.360 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.360 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:40.360 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.360 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.360 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.360 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:40.360 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:40.360 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.360 13:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:40.360 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:40.617 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:40.617 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.617 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:40.617 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:40.617 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:40.617 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.617 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.617 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.617 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.617 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.617 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.617 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.618 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.875 00:12:40.875 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.875 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.875 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.133 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.133 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.133 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.133 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.133 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.133 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.133 { 00:12:41.133 "cntlid": 97, 00:12:41.133 "qid": 0, 00:12:41.133 "state": "enabled", 00:12:41.133 "thread": "nvmf_tgt_poll_group_000", 00:12:41.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:41.133 "listen_address": { 00:12:41.133 "trtype": "TCP", 00:12:41.133 "adrfam": "IPv4", 00:12:41.133 "traddr": "10.0.0.3", 00:12:41.133 "trsvcid": "4420" 00:12:41.133 }, 00:12:41.133 "peer_address": { 00:12:41.133 "trtype": "TCP", 00:12:41.133 "adrfam": "IPv4", 00:12:41.133 "traddr": "10.0.0.1", 00:12:41.133 "trsvcid": "39906" 00:12:41.133 }, 00:12:41.133 "auth": { 00:12:41.133 "state": "completed", 00:12:41.133 "digest": "sha512", 00:12:41.133 "dhgroup": "null" 00:12:41.133 } 00:12:41.133 } 00:12:41.133 ]' 00:12:41.133 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.391 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.391 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.391 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:41.391 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.391 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.391 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.391 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.708 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:12:41.708 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:12:42.659 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.659 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:42.659 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.659 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.659 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:42.659 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.659 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:42.659 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:42.917 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:42.917 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.917 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:42.917 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:42.917 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:42.917 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.917 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.917 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.917 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.917 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.917 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.917 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.917 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.175 00:12:43.175 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.175 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.175 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.741 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.741 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.741 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.741 13:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.741 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.741 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.741 { 00:12:43.741 "cntlid": 99, 00:12:43.741 "qid": 0, 00:12:43.741 "state": "enabled", 00:12:43.741 "thread": "nvmf_tgt_poll_group_000", 00:12:43.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:43.741 "listen_address": { 00:12:43.741 "trtype": "TCP", 00:12:43.741 "adrfam": "IPv4", 00:12:43.741 "traddr": "10.0.0.3", 00:12:43.741 "trsvcid": "4420" 00:12:43.741 }, 00:12:43.741 "peer_address": { 00:12:43.741 "trtype": "TCP", 00:12:43.741 "adrfam": "IPv4", 00:12:43.741 "traddr": "10.0.0.1", 00:12:43.741 "trsvcid": "42750" 00:12:43.741 }, 00:12:43.741 "auth": { 00:12:43.741 "state": "completed", 00:12:43.741 "digest": "sha512", 00:12:43.741 "dhgroup": "null" 00:12:43.741 } 00:12:43.741 } 00:12:43.741 ]' 00:12:43.741 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.741 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.741 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.741 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:43.741 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.741 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.741 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.741 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.000 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:12:44.000 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:12:44.567 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.567 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:44.567 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.567 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.567 13:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.567 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.567 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:44.567 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:44.826 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:44.826 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.826 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:44.826 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:44.826 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:44.826 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.826 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.826 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.826 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.826 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.826 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.826 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.826 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.393 00:12:45.393 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.393 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.393 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.651 { 00:12:45.651 "cntlid": 101, 00:12:45.651 "qid": 0, 00:12:45.651 "state": "enabled", 00:12:45.651 "thread": "nvmf_tgt_poll_group_000", 00:12:45.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:45.651 "listen_address": { 00:12:45.651 "trtype": "TCP", 00:12:45.651 "adrfam": "IPv4", 00:12:45.651 "traddr": "10.0.0.3", 00:12:45.651 "trsvcid": "4420" 00:12:45.651 }, 00:12:45.651 "peer_address": { 00:12:45.651 "trtype": "TCP", 00:12:45.651 "adrfam": "IPv4", 00:12:45.651 "traddr": "10.0.0.1", 00:12:45.651 "trsvcid": "42782" 00:12:45.651 }, 00:12:45.651 "auth": { 00:12:45.651 "state": "completed", 00:12:45.651 "digest": "sha512", 00:12:45.651 "dhgroup": "null" 00:12:45.651 } 00:12:45.651 } 00:12:45.651 ]' 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.651 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.909 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:12:45.909 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:12:46.854 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.854 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:46.854 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.854 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:46.854 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.854 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.854 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:46.854 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:47.112 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:47.112 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.112 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:47.112 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:47.112 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:47.112 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.112 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:12:47.112 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.112 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.113 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.113 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:47.113 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.113 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.371 00:12:47.371 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.371 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.371 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.630 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.630 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.630 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:47.630 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.630 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.630 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.630 { 00:12:47.630 "cntlid": 103, 00:12:47.630 "qid": 0, 00:12:47.630 "state": "enabled", 00:12:47.630 "thread": "nvmf_tgt_poll_group_000", 00:12:47.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:47.630 "listen_address": { 00:12:47.630 "trtype": "TCP", 00:12:47.630 "adrfam": "IPv4", 00:12:47.630 "traddr": "10.0.0.3", 00:12:47.630 "trsvcid": "4420" 00:12:47.630 }, 00:12:47.630 "peer_address": { 00:12:47.630 "trtype": "TCP", 00:12:47.630 "adrfam": "IPv4", 00:12:47.630 "traddr": "10.0.0.1", 00:12:47.630 "trsvcid": "42816" 00:12:47.630 }, 00:12:47.630 "auth": { 00:12:47.630 "state": "completed", 00:12:47.630 "digest": "sha512", 00:12:47.630 "dhgroup": "null" 00:12:47.630 } 00:12:47.630 } 00:12:47.630 ]' 00:12:47.630 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.630 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.630 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.630 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:47.630 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.889 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.889 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.889 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.147 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:48.147 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:48.713 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.713 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:48.713 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.713 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.713 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:48.713 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:48.713 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.713 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:48.713 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:49.279 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:49.279 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.279 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:49.279 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:49.279 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:49.279 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.279 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.279 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.279 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.279 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.279 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.279 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.279 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.536 00:12:49.536 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.536 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.537 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.794 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.794 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.794 
13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.794 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.052 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.052 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.052 { 00:12:50.052 "cntlid": 105, 00:12:50.052 "qid": 0, 00:12:50.052 "state": "enabled", 00:12:50.052 "thread": "nvmf_tgt_poll_group_000", 00:12:50.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:50.052 "listen_address": { 00:12:50.052 "trtype": "TCP", 00:12:50.052 "adrfam": "IPv4", 00:12:50.052 "traddr": "10.0.0.3", 00:12:50.052 "trsvcid": "4420" 00:12:50.052 }, 00:12:50.052 "peer_address": { 00:12:50.052 "trtype": "TCP", 00:12:50.052 "adrfam": "IPv4", 00:12:50.052 "traddr": "10.0.0.1", 00:12:50.052 "trsvcid": "42836" 00:12:50.052 }, 00:12:50.052 "auth": { 00:12:50.052 "state": "completed", 00:12:50.052 "digest": "sha512", 00:12:50.052 "dhgroup": "ffdhe2048" 00:12:50.052 } 00:12:50.052 } 00:12:50.052 ]' 00:12:50.052 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.052 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.052 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.052 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:50.052 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.052 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.052 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.052 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.364 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:12:50.364 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:12:51.314 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.314 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:51.314 13:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.314 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.314 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.314 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.314 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:51.314 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:51.572 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:51.572 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.572 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:51.572 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:51.572 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:51.572 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.572 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.572 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.572 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.572 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.572 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.572 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.572 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.137 00:12:52.137 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.137 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.137 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.395 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:52.395 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.395 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.395 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.395 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.396 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.396 { 00:12:52.396 "cntlid": 107, 00:12:52.396 "qid": 0, 00:12:52.396 "state": "enabled", 00:12:52.396 "thread": "nvmf_tgt_poll_group_000", 00:12:52.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:52.396 "listen_address": { 00:12:52.396 "trtype": "TCP", 00:12:52.396 "adrfam": "IPv4", 00:12:52.396 "traddr": "10.0.0.3", 00:12:52.396 "trsvcid": "4420" 00:12:52.396 }, 00:12:52.396 "peer_address": { 00:12:52.396 "trtype": "TCP", 00:12:52.396 "adrfam": "IPv4", 00:12:52.396 "traddr": "10.0.0.1", 00:12:52.396 "trsvcid": "47178" 00:12:52.396 }, 00:12:52.396 "auth": { 00:12:52.396 "state": "completed", 00:12:52.396 "digest": "sha512", 00:12:52.396 "dhgroup": "ffdhe2048" 00:12:52.396 } 00:12:52.396 } 00:12:52.396 ]' 00:12:52.396 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.396 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:52.396 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:52.396 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:52.396 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:52.396 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.396 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.396 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.961 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:12:52.962 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:12:53.528 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.528 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:53.528 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.528 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.528 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.528 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.528 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:53.528 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:53.787 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:53.787 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.787 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:53.787 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:53.787 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:53.787 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.787 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.787 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.787 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.045 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.045 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.045 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.045 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.304 00:12:54.304 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.304 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.304 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:54.562 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.562 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.562 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.562 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.562 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.562 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.562 { 00:12:54.562 "cntlid": 109, 00:12:54.562 "qid": 0, 00:12:54.562 "state": "enabled", 00:12:54.562 "thread": "nvmf_tgt_poll_group_000", 00:12:54.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:54.562 "listen_address": { 00:12:54.563 "trtype": "TCP", 00:12:54.563 "adrfam": "IPv4", 00:12:54.563 "traddr": "10.0.0.3", 00:12:54.563 "trsvcid": "4420" 00:12:54.563 }, 00:12:54.563 "peer_address": { 00:12:54.563 "trtype": "TCP", 00:12:54.563 "adrfam": "IPv4", 00:12:54.563 "traddr": "10.0.0.1", 00:12:54.563 "trsvcid": "47196" 00:12:54.563 }, 00:12:54.563 "auth": { 00:12:54.563 "state": "completed", 00:12:54.563 "digest": "sha512", 00:12:54.563 "dhgroup": "ffdhe2048" 00:12:54.563 } 00:12:54.563 } 00:12:54.563 ]' 00:12:54.563 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.823 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.823 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.823 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:54.823 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.823 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.823 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.823 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.083 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:12:55.083 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:12:56.017 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.017 13:32:07 
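
Each pass visible above repeats the same host/target RPC sequence: the host-side bdev layer is restricted to a single digest and DH group, the target authorizes the host NQN with a DHCHAP key pair, a controller is attached through the host RPC socket, and the resulting qpair's auth parameters are checked with jq before tearing down. A condensed sketch of one such pass, assuming the key names (key2/ckey2) were registered with the keyring earlier in the run and that the target-side rpc_cmd talks to the default SPDK RPC socket (neither is shown in this excerpt):

# host side: allow only sha512 + ffdhe2048 for DH-HMAC-CHAP
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
# target side: authorize the host NQN with a key and a controller (bidirectional) key
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# host side: attach a controller, authenticating with the same keys
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# target side: confirm the qpair negotiated the expected parameters
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | .digest, .dhgroup, .state'   # expect: sha512, ffdhe2048, completed
# tear down the attachment; the kernel nvme connect/disconnect and
# nvmf_subsystem_remove_host follow before the next key/dhgroup combination
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
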
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:56.017 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.017 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.017 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.017 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.017 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:56.017 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:56.584 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:56.584 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.584 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.584 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:56.584 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:56.584 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.584 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:12:56.584 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.584 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.584 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.584 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:56.584 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:56.584 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:57.149 00:12:57.149 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.149 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.149 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.419 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.420 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.420 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.420 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.420 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.420 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.420 { 00:12:57.420 "cntlid": 111, 00:12:57.420 "qid": 0, 00:12:57.420 "state": "enabled", 00:12:57.420 "thread": "nvmf_tgt_poll_group_000", 00:12:57.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:57.420 "listen_address": { 00:12:57.420 "trtype": "TCP", 00:12:57.420 "adrfam": "IPv4", 00:12:57.420 "traddr": "10.0.0.3", 00:12:57.420 "trsvcid": "4420" 00:12:57.420 }, 00:12:57.420 "peer_address": { 00:12:57.420 "trtype": "TCP", 00:12:57.420 "adrfam": "IPv4", 00:12:57.420 "traddr": "10.0.0.1", 00:12:57.420 "trsvcid": "47204" 00:12:57.420 }, 00:12:57.420 "auth": { 00:12:57.420 "state": "completed", 00:12:57.420 "digest": "sha512", 00:12:57.420 "dhgroup": "ffdhe2048" 00:12:57.420 } 00:12:57.420 } 00:12:57.420 ]' 00:12:57.420 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.420 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.420 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.420 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:57.420 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.420 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.420 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.420 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.985 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:57.985 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:12:58.552 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.552 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:12:58.552 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.552 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.552 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.552 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:58.552 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.552 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:58.552 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:58.811 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:58.811 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.811 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.811 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:58.811 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:58.811 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.811 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.811 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.811 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.811 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.811 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.811 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.811 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.377 00:12:59.377 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.377 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.377 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.377 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.377 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.377 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.377 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.638 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.638 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.638 { 00:12:59.638 "cntlid": 113, 00:12:59.638 "qid": 0, 00:12:59.638 "state": "enabled", 00:12:59.638 "thread": "nvmf_tgt_poll_group_000", 00:12:59.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:12:59.638 "listen_address": { 00:12:59.638 "trtype": "TCP", 00:12:59.638 "adrfam": "IPv4", 00:12:59.638 "traddr": "10.0.0.3", 00:12:59.638 "trsvcid": "4420" 00:12:59.638 }, 00:12:59.638 "peer_address": { 00:12:59.638 "trtype": "TCP", 00:12:59.638 "adrfam": "IPv4", 00:12:59.638 "traddr": "10.0.0.1", 00:12:59.638 "trsvcid": "47240" 00:12:59.638 }, 00:12:59.638 "auth": { 00:12:59.638 "state": "completed", 00:12:59.638 "digest": "sha512", 00:12:59.638 "dhgroup": "ffdhe3072" 00:12:59.638 } 00:12:59.638 } 00:12:59.638 ]' 00:12:59.638 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.638 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.638 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.638 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:59.638 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.638 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.638 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.638 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.906 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:12:59.906 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret 
DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:13:00.473 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.473 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:00.473 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.473 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.732 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.732 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.732 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:00.732 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:00.990 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:00.990 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.990 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.990 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:00.990 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:00.990 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.990 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.990 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.990 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.990 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.990 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.990 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.990 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.249 00:13:01.249 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.249 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.249 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.508 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.508 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.508 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.508 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.508 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.508 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.508 { 00:13:01.508 "cntlid": 115, 00:13:01.508 "qid": 0, 00:13:01.508 "state": "enabled", 00:13:01.508 "thread": "nvmf_tgt_poll_group_000", 00:13:01.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:01.508 "listen_address": { 00:13:01.508 "trtype": "TCP", 00:13:01.508 "adrfam": "IPv4", 00:13:01.508 "traddr": "10.0.0.3", 00:13:01.508 "trsvcid": "4420" 00:13:01.508 }, 00:13:01.508 "peer_address": { 00:13:01.508 "trtype": "TCP", 00:13:01.508 "adrfam": "IPv4", 00:13:01.508 "traddr": "10.0.0.1", 00:13:01.508 "trsvcid": "35610" 00:13:01.508 }, 00:13:01.508 "auth": { 00:13:01.508 "state": "completed", 00:13:01.508 "digest": "sha512", 00:13:01.508 "dhgroup": "ffdhe3072" 00:13:01.508 } 00:13:01.508 } 00:13:01.508 ]' 00:13:01.508 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.768 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.768 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.768 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:01.768 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.768 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.768 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.768 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.027 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:13:02.027 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 
8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:13:02.595 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.595 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:02.595 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.595 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.853 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.853 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.853 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:02.853 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:03.113 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:03.113 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.113 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.113 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:03.113 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:03.113 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.113 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.113 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.113 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.113 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.113 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.113 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.113 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.372 00:13:03.372 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.372 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.372 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.939 { 00:13:03.939 "cntlid": 117, 00:13:03.939 "qid": 0, 00:13:03.939 "state": "enabled", 00:13:03.939 "thread": "nvmf_tgt_poll_group_000", 00:13:03.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:03.939 "listen_address": { 00:13:03.939 "trtype": "TCP", 00:13:03.939 "adrfam": "IPv4", 00:13:03.939 "traddr": "10.0.0.3", 00:13:03.939 "trsvcid": "4420" 00:13:03.939 }, 00:13:03.939 "peer_address": { 00:13:03.939 "trtype": "TCP", 00:13:03.939 "adrfam": "IPv4", 00:13:03.939 "traddr": "10.0.0.1", 00:13:03.939 "trsvcid": "35622" 00:13:03.939 }, 00:13:03.939 "auth": { 00:13:03.939 "state": "completed", 00:13:03.939 "digest": "sha512", 00:13:03.939 "dhgroup": "ffdhe3072" 00:13:03.939 } 00:13:03.939 } 00:13:03.939 ]' 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.939 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.198 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:13:04.198 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:13:05.150 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.150 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:05.150 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.150 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.150 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.150 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.150 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:05.150 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:05.150 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:05.150 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.150 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:05.150 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:05.150 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:05.150 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.150 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:13:05.150 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.150 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.150 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.150 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:05.150 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:05.150 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:05.716 00:13:05.716 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:05.716 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.716 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.974 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.974 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.974 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.974 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.974 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.974 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:05.974 { 00:13:05.974 "cntlid": 119, 00:13:05.974 "qid": 0, 00:13:05.974 "state": "enabled", 00:13:05.974 "thread": "nvmf_tgt_poll_group_000", 00:13:05.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:05.974 "listen_address": { 00:13:05.974 "trtype": "TCP", 00:13:05.974 "adrfam": "IPv4", 00:13:05.974 "traddr": "10.0.0.3", 00:13:05.974 "trsvcid": "4420" 00:13:05.974 }, 00:13:05.974 "peer_address": { 00:13:05.974 "trtype": "TCP", 00:13:05.974 "adrfam": "IPv4", 00:13:05.974 "traddr": "10.0.0.1", 00:13:05.974 "trsvcid": "35640" 00:13:05.974 }, 00:13:05.974 "auth": { 00:13:05.974 "state": "completed", 00:13:05.974 "digest": "sha512", 00:13:05.974 "dhgroup": "ffdhe3072" 00:13:05.974 } 00:13:05.974 } 00:13:05.974 ]' 00:13:05.974 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:05.974 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:05.974 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:05.975 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:05.975 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.975 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.975 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.975 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.233 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:13:06.233 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:13:07.168 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.168 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:07.168 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.169 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.169 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.169 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:07.169 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.169 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:07.169 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:07.169 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:07.169 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.169 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:07.169 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:07.169 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:07.169 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.169 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.169 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.169 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.169 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.169 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.169 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.169 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.734 00:13:07.734 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.734 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.734 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.992 { 00:13:07.992 "cntlid": 121, 00:13:07.992 "qid": 0, 00:13:07.992 "state": "enabled", 00:13:07.992 "thread": "nvmf_tgt_poll_group_000", 00:13:07.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:07.992 "listen_address": { 00:13:07.992 "trtype": "TCP", 00:13:07.992 "adrfam": "IPv4", 00:13:07.992 "traddr": "10.0.0.3", 00:13:07.992 "trsvcid": "4420" 00:13:07.992 }, 00:13:07.992 "peer_address": { 00:13:07.992 "trtype": "TCP", 00:13:07.992 "adrfam": "IPv4", 00:13:07.992 "traddr": "10.0.0.1", 00:13:07.992 "trsvcid": "35686" 00:13:07.992 }, 00:13:07.992 "auth": { 00:13:07.992 "state": "completed", 00:13:07.992 "digest": "sha512", 00:13:07.992 "dhgroup": "ffdhe4096" 00:13:07.992 } 00:13:07.992 } 00:13:07.992 ]' 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.992 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.251 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret 
DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:13:08.251 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:13:09.187 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.187 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:09.187 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.187 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.187 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.187 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.187 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:09.188 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:09.446 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:09.446 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.446 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:09.446 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:09.446 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:09.446 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.446 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.446 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.446 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.446 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.446 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.446 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.446 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.704 00:13:09.704 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.704 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.704 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.961 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.961 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.961 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.961 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.961 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.961 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.961 { 00:13:09.961 "cntlid": 123, 00:13:09.961 "qid": 0, 00:13:09.961 "state": "enabled", 00:13:09.962 "thread": "nvmf_tgt_poll_group_000", 00:13:09.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:09.962 "listen_address": { 00:13:09.962 "trtype": "TCP", 00:13:09.962 "adrfam": "IPv4", 00:13:09.962 "traddr": "10.0.0.3", 00:13:09.962 "trsvcid": "4420" 00:13:09.962 }, 00:13:09.962 "peer_address": { 00:13:09.962 "trtype": "TCP", 00:13:09.962 "adrfam": "IPv4", 00:13:09.962 "traddr": "10.0.0.1", 00:13:09.962 "trsvcid": "35712" 00:13:09.962 }, 00:13:09.962 "auth": { 00:13:09.962 "state": "completed", 00:13:09.962 "digest": "sha512", 00:13:09.962 "dhgroup": "ffdhe4096" 00:13:09.962 } 00:13:09.962 } 00:13:09.962 ]' 00:13:09.962 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.220 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.220 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.220 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:10.220 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.220 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.220 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.220 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.478 13:32:22 
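
Besides the SPDK host bdev path, every pass also authenticates from the Linux kernel initiator via nvme-cli, passing the DH-HMAC-CHAP secrets directly on the command line. A sketch using the flags visible in this log; the DHHC-1 secret strings below are placeholders for the values generated earlier in the run:

# kernel initiator: connect with host and controller DH-HMAC-CHAP secrets
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 \
    --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 \
    --dhchap-secret 'DHHC-1:01:<host secret, base64>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller secret, base64>:'
# disconnect and de-authorize the host on the target before the next pass
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5
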
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:13:10.478 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:13:11.499 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.499 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:11.499 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.499 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.499 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.499 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.499 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:11.499 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:11.759 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:11.759 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.759 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:11.759 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:11.759 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:11.759 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.759 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.759 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.759 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.759 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.759 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.759 13:32:23 
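
The xtrace markers (target/auth.sh@119 through @123) show what drives these passes: nested loops over DH groups and key indices, each of which re-restricts the host options and re-runs connect_authenticate. A sketch of that skeleton as it appears from this log; the outer digest loop and the keys/ckeys arrays are set up earlier in auth.sh and are not visible in this excerpt:

for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ...
    for keyid in "${!keys[@]}"; do         # key0 .. key3
        # limit the host to a single digest/dhgroup combination
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # authorize, attach, verify the qpair auth state, then tear down
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done
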
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.759 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.325 00:13:12.325 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.325 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.325 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.583 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.584 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.584 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.584 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.584 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.584 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.584 { 00:13:12.584 "cntlid": 125, 00:13:12.584 "qid": 0, 00:13:12.584 "state": "enabled", 00:13:12.584 "thread": "nvmf_tgt_poll_group_000", 00:13:12.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:12.584 "listen_address": { 00:13:12.584 "trtype": "TCP", 00:13:12.584 "adrfam": "IPv4", 00:13:12.584 "traddr": "10.0.0.3", 00:13:12.584 "trsvcid": "4420" 00:13:12.584 }, 00:13:12.584 "peer_address": { 00:13:12.584 "trtype": "TCP", 00:13:12.584 "adrfam": "IPv4", 00:13:12.584 "traddr": "10.0.0.1", 00:13:12.584 "trsvcid": "40368" 00:13:12.584 }, 00:13:12.584 "auth": { 00:13:12.584 "state": "completed", 00:13:12.584 "digest": "sha512", 00:13:12.584 "dhgroup": "ffdhe4096" 00:13:12.584 } 00:13:12.584 } 00:13:12.584 ]' 00:13:12.584 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.584 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.584 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.584 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:12.584 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.584 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.584 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.584 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.150 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:13:13.150 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:13:13.717 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.717 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:13.717 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.717 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.717 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.717 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.717 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:13.717 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:13.975 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:13.975 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.975 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:13.975 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:13.975 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:13.975 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.975 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:13:13.975 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.975 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.975 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.975 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:13.975 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:13.975 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:14.542 00:13:14.542 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.542 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.542 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.800 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.800 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.800 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.800 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.800 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.801 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.801 { 00:13:14.801 "cntlid": 127, 00:13:14.801 "qid": 0, 00:13:14.801 "state": "enabled", 00:13:14.801 "thread": "nvmf_tgt_poll_group_000", 00:13:14.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:14.801 "listen_address": { 00:13:14.801 "trtype": "TCP", 00:13:14.801 "adrfam": "IPv4", 00:13:14.801 "traddr": "10.0.0.3", 00:13:14.801 "trsvcid": "4420" 00:13:14.801 }, 00:13:14.801 "peer_address": { 00:13:14.801 "trtype": "TCP", 00:13:14.801 "adrfam": "IPv4", 00:13:14.801 "traddr": "10.0.0.1", 00:13:14.801 "trsvcid": "40388" 00:13:14.801 }, 00:13:14.801 "auth": { 00:13:14.801 "state": "completed", 00:13:14.801 "digest": "sha512", 00:13:14.801 "dhgroup": "ffdhe4096" 00:13:14.801 } 00:13:14.801 } 00:13:14.801 ]' 00:13:14.801 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.801 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.801 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.059 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:15.059 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.059 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.059 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.059 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.318 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:13:15.318 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:13:16.317 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.317 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:16.317 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.317 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.317 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.317 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:16.317 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.317 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:16.317 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:16.317 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:16.317 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.317 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:16.317 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:16.317 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:16.317 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.317 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.317 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.317 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.317 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.317 13:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.317 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.317 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.883 00:13:16.883 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.883 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.883 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.142 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.142 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.142 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.142 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.142 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.142 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.142 { 00:13:17.142 "cntlid": 129, 00:13:17.142 "qid": 0, 00:13:17.142 "state": "enabled", 00:13:17.142 "thread": "nvmf_tgt_poll_group_000", 00:13:17.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:17.142 "listen_address": { 00:13:17.142 "trtype": "TCP", 00:13:17.142 "adrfam": "IPv4", 00:13:17.142 "traddr": "10.0.0.3", 00:13:17.142 "trsvcid": "4420" 00:13:17.142 }, 00:13:17.142 "peer_address": { 00:13:17.142 "trtype": "TCP", 00:13:17.142 "adrfam": "IPv4", 00:13:17.142 "traddr": "10.0.0.1", 00:13:17.142 "trsvcid": "40424" 00:13:17.142 }, 00:13:17.142 "auth": { 00:13:17.142 "state": "completed", 00:13:17.142 "digest": "sha512", 00:13:17.142 "dhgroup": "ffdhe6144" 00:13:17.142 } 00:13:17.142 } 00:13:17.142 ]' 00:13:17.142 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.142 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.142 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.401 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:17.401 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.401 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.401 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.401 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.660 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:13:17.660 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:13:18.226 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.226 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:18.226 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.226 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:18.226 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:18.485 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:18.485 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.485 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:18.485 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:18.485 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:18.485 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.485 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.485 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.485 13:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.485 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.485 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.485 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.485 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.051 00:13:19.051 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.051 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.051 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.309 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.309 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.309 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.309 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.568 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.568 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.568 { 00:13:19.568 "cntlid": 131, 00:13:19.568 "qid": 0, 00:13:19.568 "state": "enabled", 00:13:19.568 "thread": "nvmf_tgt_poll_group_000", 00:13:19.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:19.568 "listen_address": { 00:13:19.568 "trtype": "TCP", 00:13:19.568 "adrfam": "IPv4", 00:13:19.568 "traddr": "10.0.0.3", 00:13:19.568 "trsvcid": "4420" 00:13:19.568 }, 00:13:19.568 "peer_address": { 00:13:19.568 "trtype": "TCP", 00:13:19.568 "adrfam": "IPv4", 00:13:19.568 "traddr": "10.0.0.1", 00:13:19.568 "trsvcid": "40464" 00:13:19.568 }, 00:13:19.568 "auth": { 00:13:19.568 "state": "completed", 00:13:19.568 "digest": "sha512", 00:13:19.568 "dhgroup": "ffdhe6144" 00:13:19.568 } 00:13:19.568 } 00:13:19.568 ]' 00:13:19.568 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.568 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.568 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.568 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:19.568 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:19.568 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.568 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.568 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.826 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:13:19.826 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.761 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.761 13:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.762 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.762 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.762 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.762 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.762 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.327 00:13:21.327 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.327 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.327 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.592 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.592 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.593 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.593 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.593 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.593 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.593 { 00:13:21.593 "cntlid": 133, 00:13:21.593 "qid": 0, 00:13:21.593 "state": "enabled", 00:13:21.593 "thread": "nvmf_tgt_poll_group_000", 00:13:21.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:21.593 "listen_address": { 00:13:21.593 "trtype": "TCP", 00:13:21.593 "adrfam": "IPv4", 00:13:21.593 "traddr": "10.0.0.3", 00:13:21.593 "trsvcid": "4420" 00:13:21.593 }, 00:13:21.593 "peer_address": { 00:13:21.593 "trtype": "TCP", 00:13:21.593 "adrfam": "IPv4", 00:13:21.593 "traddr": "10.0.0.1", 00:13:21.593 "trsvcid": "55508" 00:13:21.593 }, 00:13:21.593 "auth": { 00:13:21.593 "state": "completed", 00:13:21.593 "digest": "sha512", 00:13:21.593 "dhgroup": "ffdhe6144" 00:13:21.593 } 00:13:21.593 } 00:13:21.593 ]' 00:13:21.593 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.593 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.593 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.593 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:21.593 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.852 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.852 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.852 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.109 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:13:22.109 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:13:22.676 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.676 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:22.676 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.676 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.676 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.676 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.676 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:22.676 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:23.243 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:23.243 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.243 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:23.243 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:23.243 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:23.243 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.243 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:13:23.243 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.243 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.243 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.243 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:23.244 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:23.244 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:23.503 00:13:23.503 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.503 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.503 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.070 { 00:13:24.070 "cntlid": 135, 00:13:24.070 "qid": 0, 00:13:24.070 "state": "enabled", 00:13:24.070 "thread": "nvmf_tgt_poll_group_000", 00:13:24.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:24.070 "listen_address": { 00:13:24.070 "trtype": "TCP", 00:13:24.070 "adrfam": "IPv4", 00:13:24.070 "traddr": "10.0.0.3", 00:13:24.070 "trsvcid": "4420" 00:13:24.070 }, 00:13:24.070 "peer_address": { 00:13:24.070 "trtype": "TCP", 00:13:24.070 "adrfam": "IPv4", 00:13:24.070 "traddr": "10.0.0.1", 00:13:24.070 "trsvcid": "55542" 00:13:24.070 }, 00:13:24.070 "auth": { 00:13:24.070 "state": "completed", 00:13:24.070 "digest": "sha512", 00:13:24.070 "dhgroup": "ffdhe6144" 00:13:24.070 } 00:13:24.070 } 00:13:24.070 ]' 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.070 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.328 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:13:24.328 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:13:25.263 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.263 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:25.263 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.263 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.263 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.263 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:25.263 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.263 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:25.263 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:25.521 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:25.521 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.521 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:25.521 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:25.521 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:25.521 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.521 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.521 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.521 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.521 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.521 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.521 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.088 00:13:26.088 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.088 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.088 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.347 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.347 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.347 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.347 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.347 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.347 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.347 { 00:13:26.347 "cntlid": 137, 00:13:26.347 "qid": 0, 00:13:26.347 "state": "enabled", 00:13:26.347 "thread": "nvmf_tgt_poll_group_000", 00:13:26.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:26.347 "listen_address": { 00:13:26.347 "trtype": "TCP", 00:13:26.347 "adrfam": "IPv4", 00:13:26.347 "traddr": "10.0.0.3", 00:13:26.347 "trsvcid": "4420" 00:13:26.347 }, 00:13:26.347 "peer_address": { 00:13:26.347 "trtype": "TCP", 00:13:26.347 "adrfam": "IPv4", 00:13:26.347 "traddr": "10.0.0.1", 00:13:26.347 "trsvcid": "55576" 00:13:26.347 }, 00:13:26.347 "auth": { 00:13:26.347 "state": "completed", 00:13:26.347 "digest": "sha512", 00:13:26.347 "dhgroup": "ffdhe8192" 00:13:26.347 } 00:13:26.347 } 00:13:26.347 ]' 00:13:26.347 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.606 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.606 13:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.606 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:26.606 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.606 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.606 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.606 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.864 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:13:26.864 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:13:27.799 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.799 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:27.799 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.799 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.799 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.799 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.799 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:27.799 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:28.058 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:28.058 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.058 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.058 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:28.058 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:28.058 13:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.058 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.058 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.058 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.058 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.058 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.058 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.058 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.625 00:13:28.625 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:28.625 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.625 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:28.883 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.883 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.883 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.883 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.141 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.141 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.141 { 00:13:29.141 "cntlid": 139, 00:13:29.141 "qid": 0, 00:13:29.141 "state": "enabled", 00:13:29.141 "thread": "nvmf_tgt_poll_group_000", 00:13:29.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:29.141 "listen_address": { 00:13:29.141 "trtype": "TCP", 00:13:29.141 "adrfam": "IPv4", 00:13:29.141 "traddr": "10.0.0.3", 00:13:29.141 "trsvcid": "4420" 00:13:29.141 }, 00:13:29.141 "peer_address": { 00:13:29.141 "trtype": "TCP", 00:13:29.141 "adrfam": "IPv4", 00:13:29.141 "traddr": "10.0.0.1", 00:13:29.141 "trsvcid": "55606" 00:13:29.141 }, 00:13:29.141 "auth": { 00:13:29.141 "state": "completed", 00:13:29.141 "digest": "sha512", 00:13:29.141 "dhgroup": "ffdhe8192" 00:13:29.141 } 00:13:29.141 } 00:13:29.141 ]' 00:13:29.141 13:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.141 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.141 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.141 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:29.141 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.141 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.141 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.141 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.400 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:13:29.400 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: --dhchap-ctrl-secret DHHC-1:02:NjE3ZGJjMmIwZjljZDI2MTZmYWJiMGM3NTUzMThjNDk5MGVhZWE3MzFmYTAzMDIwzlhFPQ==: 00:13:30.334 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.334 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:30.334 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.334 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.334 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.334 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.334 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:30.334 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:30.593 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:30.593 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.593 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:30.593 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:30.593 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:30.593 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.593 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.593 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.593 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.593 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.593 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.593 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.593 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.162 00:13:31.162 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.162 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.162 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.420 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.421 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.421 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.421 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.421 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.421 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.421 { 00:13:31.421 "cntlid": 141, 00:13:31.421 "qid": 0, 00:13:31.421 "state": "enabled", 00:13:31.421 "thread": "nvmf_tgt_poll_group_000", 00:13:31.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:31.421 "listen_address": { 00:13:31.421 "trtype": "TCP", 00:13:31.421 "adrfam": "IPv4", 00:13:31.421 "traddr": "10.0.0.3", 00:13:31.421 "trsvcid": "4420" 00:13:31.421 }, 00:13:31.421 "peer_address": { 00:13:31.421 "trtype": "TCP", 00:13:31.421 "adrfam": "IPv4", 00:13:31.421 "traddr": "10.0.0.1", 00:13:31.421 "trsvcid": "55628" 00:13:31.421 }, 00:13:31.421 "auth": { 00:13:31.421 "state": "completed", 00:13:31.421 "digest": 
"sha512", 00:13:31.421 "dhgroup": "ffdhe8192" 00:13:31.421 } 00:13:31.421 } 00:13:31.421 ]' 00:13:31.421 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.679 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.679 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.679 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:31.680 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.680 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.680 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.680 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.938 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:13:31.938 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:01:NGY5ZmMyN2JiZTcwMmVjYmM3MWQ5OWNlNTk1NTY1ZmFhJKB1: 00:13:32.872 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.872 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:32.872 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.872 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.872 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.872 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.872 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:32.872 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:33.130 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:33.130 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.130 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:33.130 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:33.130 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:33.130 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.130 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:13:33.130 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.130 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.130 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.130 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:33.130 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:33.130 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:33.751 00:13:33.751 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.751 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.751 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.009 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.009 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.009 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.009 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.009 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.009 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.009 { 00:13:34.009 "cntlid": 143, 00:13:34.009 "qid": 0, 00:13:34.009 "state": "enabled", 00:13:34.009 "thread": "nvmf_tgt_poll_group_000", 00:13:34.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:34.009 "listen_address": { 00:13:34.009 "trtype": "TCP", 00:13:34.009 "adrfam": "IPv4", 00:13:34.009 "traddr": "10.0.0.3", 00:13:34.009 "trsvcid": "4420" 00:13:34.009 }, 00:13:34.009 "peer_address": { 00:13:34.009 "trtype": "TCP", 00:13:34.009 "adrfam": "IPv4", 00:13:34.009 "traddr": "10.0.0.1", 00:13:34.009 "trsvcid": "42806" 00:13:34.009 }, 00:13:34.009 "auth": { 00:13:34.009 "state": "completed", 00:13:34.009 
"digest": "sha512", 00:13:34.009 "dhgroup": "ffdhe8192" 00:13:34.009 } 00:13:34.009 } 00:13:34.009 ]' 00:13:34.009 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.267 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.267 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.267 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:34.267 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.267 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.267 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.267 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.523 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:13:34.523 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:13:35.534 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.534 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:35.534 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.534 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.534 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.534 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:35.534 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:35.534 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:35.534 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:35.534 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:35.534 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:36.103 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:36.103 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.103 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:36.103 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:36.103 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:36.103 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.103 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.103 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.103 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.103 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.103 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.103 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.103 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.669 00:13:36.669 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.669 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.669 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.234 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.234 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.234 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.234 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.234 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.234 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.234 { 00:13:37.234 "cntlid": 145, 00:13:37.234 "qid": 0, 00:13:37.234 "state": "enabled", 00:13:37.234 "thread": "nvmf_tgt_poll_group_000", 00:13:37.234 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:37.234 "listen_address": { 00:13:37.234 "trtype": "TCP", 00:13:37.234 "adrfam": "IPv4", 00:13:37.234 "traddr": "10.0.0.3", 00:13:37.234 "trsvcid": "4420" 00:13:37.234 }, 00:13:37.234 "peer_address": { 00:13:37.234 "trtype": "TCP", 00:13:37.234 "adrfam": "IPv4", 00:13:37.234 "traddr": "10.0.0.1", 00:13:37.234 "trsvcid": "42838" 00:13:37.234 }, 00:13:37.234 "auth": { 00:13:37.234 "state": "completed", 00:13:37.234 "digest": "sha512", 00:13:37.234 "dhgroup": "ffdhe8192" 00:13:37.234 } 00:13:37.234 } 00:13:37.234 ]' 00:13:37.234 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.234 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.234 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.234 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:37.234 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.234 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.234 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.234 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.797 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:13:37.797 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:00:MmE1MmU1YWYzNzhjZjM2YWY5ZDJhNDUwMjdkODhmY2QzMmQxMTVkODQ5ZjA4Nzc0fI2SMA==: --dhchap-ctrl-secret DHHC-1:03:NDZmYTg2ZDJkZmMyYjIyOWIzOTMzNDI3MTk3ZjAyM2VmN2I4ZTg1NmNmNDVhMjZlOWFkZTQwZWQyYjg2NzE5MJqKdbk=: 00:13:38.363 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.363 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:38.363 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.363 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.363 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.363 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 00:13:38.363 13:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.363 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.621 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.621 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:38.621 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:38.621 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:38.621 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:38.621 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.621 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:38.621 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.621 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:38.621 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:38.621 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:39.188 request: 00:13:39.188 { 00:13:39.188 "name": "nvme0", 00:13:39.188 "trtype": "tcp", 00:13:39.189 "traddr": "10.0.0.3", 00:13:39.189 "adrfam": "ipv4", 00:13:39.189 "trsvcid": "4420", 00:13:39.189 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:39.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:39.189 "prchk_reftag": false, 00:13:39.189 "prchk_guard": false, 00:13:39.189 "hdgst": false, 00:13:39.189 "ddgst": false, 00:13:39.189 "dhchap_key": "key2", 00:13:39.189 "allow_unrecognized_csi": false, 00:13:39.189 "method": "bdev_nvme_attach_controller", 00:13:39.189 "req_id": 1 00:13:39.189 } 00:13:39.189 Got JSON-RPC error response 00:13:39.189 response: 00:13:39.189 { 00:13:39.189 "code": -5, 00:13:39.189 "message": "Input/output error" 00:13:39.189 } 00:13:39.189 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:39.189 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:39.189 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:39.189 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:39.189 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:39.189 
13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.189 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:39.189 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:39.757 request: 00:13:39.757 { 00:13:39.757 "name": "nvme0", 00:13:39.757 "trtype": "tcp", 00:13:39.757 "traddr": "10.0.0.3", 00:13:39.757 "adrfam": "ipv4", 00:13:39.757 "trsvcid": "4420", 00:13:39.757 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:39.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:39.757 "prchk_reftag": false, 00:13:39.757 "prchk_guard": false, 00:13:39.757 "hdgst": false, 00:13:39.757 "ddgst": false, 00:13:39.757 "dhchap_key": "key1", 00:13:39.757 "dhchap_ctrlr_key": "ckey2", 00:13:39.757 "allow_unrecognized_csi": false, 00:13:39.757 "method": "bdev_nvme_attach_controller", 00:13:39.757 "req_id": 1 00:13:39.757 } 00:13:39.757 Got JSON-RPC error response 00:13:39.757 response: 00:13:39.757 { 
00:13:39.757 "code": -5, 00:13:39.757 "message": "Input/output error" 00:13:39.757 } 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.757 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.714 
request: 00:13:40.714 { 00:13:40.714 "name": "nvme0", 00:13:40.714 "trtype": "tcp", 00:13:40.714 "traddr": "10.0.0.3", 00:13:40.714 "adrfam": "ipv4", 00:13:40.714 "trsvcid": "4420", 00:13:40.714 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:40.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:40.714 "prchk_reftag": false, 00:13:40.714 "prchk_guard": false, 00:13:40.714 "hdgst": false, 00:13:40.714 "ddgst": false, 00:13:40.714 "dhchap_key": "key1", 00:13:40.714 "dhchap_ctrlr_key": "ckey1", 00:13:40.714 "allow_unrecognized_csi": false, 00:13:40.714 "method": "bdev_nvme_attach_controller", 00:13:40.714 "req_id": 1 00:13:40.714 } 00:13:40.714 Got JSON-RPC error response 00:13:40.714 response: 00:13:40.714 { 00:13:40.714 "code": -5, 00:13:40.714 "message": "Input/output error" 00:13:40.714 } 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67488 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67488 ']' 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67488 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67488 00:13:40.714 killing process with pid 67488 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67488' 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67488 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67488 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:40.714 13:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70702 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70702 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70702 ']' 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.714 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70702 00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70702 ']' 00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
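(Editor's note) The restart above relaunches the target with --wait-for-rpc so DH-HMAC-CHAP keys can be registered through the keyring before initialization completes. A minimal sketch of that sequence, using the binary path, netns name and RPC socket visible in the log; the retry loop stands in for the waitforlisten helper and the exact resume point is an assumption:

    # Sketch only: start the target paused at RPC init, wait for the socket, then resume.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    # Poll until the RPC socket answers (stand-in for the waitforlisten helper used above).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # Keys are loaded here via keyring_file_add_key (next step in the log), then init is resumed.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init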
00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.025 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.592 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.592 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:41.592 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:41.592 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.593 null0 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.sar 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ED2 ]] 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ED2 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8TY 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.IoB ]] 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IoB 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:41.593 13:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.STL 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.5zD ]] 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5zD 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.UuZ 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
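(Editor's note) The steps above register the key files with the keyring and then authorize the host NQN for key3. Condensed into a standalone sketch for reference, using only the file names, sockets and RPC flags that appear in this log; treat it as illustrative, not as the test script itself:

    # Sketch of the target-side key registration and host mapping exercised above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Register the key file with the target keyring (name and path taken from the log).
    $rpc -s /var/tmp/spdk.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.UuZ
    # Allow the host NQN to authenticate with key3 on the subsystem.
    $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3
    # Host side: register the same key, restrict digests/dhgroups, then attach with it.
    $rpc -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.UuZ
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3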
00:13:41.593 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.530 nvme0n1 00:13:42.788 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.788 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.788 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.047 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.047 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.047 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.047 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.047 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.047 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.047 { 00:13:43.047 "cntlid": 1, 00:13:43.047 "qid": 0, 00:13:43.047 "state": "enabled", 00:13:43.047 "thread": "nvmf_tgt_poll_group_000", 00:13:43.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:43.047 "listen_address": { 00:13:43.047 "trtype": "TCP", 00:13:43.047 "adrfam": "IPv4", 00:13:43.047 "traddr": "10.0.0.3", 00:13:43.047 "trsvcid": "4420" 00:13:43.047 }, 00:13:43.047 "peer_address": { 00:13:43.047 "trtype": "TCP", 00:13:43.047 "adrfam": "IPv4", 00:13:43.047 "traddr": "10.0.0.1", 00:13:43.047 "trsvcid": "35450" 00:13:43.047 }, 00:13:43.047 "auth": { 00:13:43.047 "state": "completed", 00:13:43.047 "digest": "sha512", 00:13:43.047 "dhgroup": "ffdhe8192" 00:13:43.047 } 00:13:43.047 } 00:13:43.047 ]' 00:13:43.047 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.047 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:43.047 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.047 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:43.047 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.047 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.047 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.047 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.614 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:13:43.614 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:13:44.182 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.182 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:44.182 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.182 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.182 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.182 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key3 00:13:44.182 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.182 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.182 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.182 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:44.182 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:44.748 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:44.749 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:44.749 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:44.749 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:44.749 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.749 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:44.749 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.749 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:44.749 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:44.749 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.007 request: 00:13:45.007 { 00:13:45.007 "name": "nvme0", 00:13:45.007 "trtype": "tcp", 00:13:45.007 "traddr": "10.0.0.3", 00:13:45.007 "adrfam": "ipv4", 00:13:45.007 "trsvcid": "4420", 00:13:45.007 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:45.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:45.007 "prchk_reftag": false, 00:13:45.007 "prchk_guard": false, 00:13:45.007 "hdgst": false, 00:13:45.007 "ddgst": false, 00:13:45.007 "dhchap_key": "key3", 00:13:45.007 "allow_unrecognized_csi": false, 00:13:45.007 "method": "bdev_nvme_attach_controller", 00:13:45.007 "req_id": 1 00:13:45.007 } 00:13:45.007 Got JSON-RPC error response 00:13:45.007 response: 00:13:45.007 { 00:13:45.007 "code": -5, 00:13:45.007 "message": "Input/output error" 00:13:45.007 } 00:13:45.007 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:45.007 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:45.007 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:45.007 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:45.007 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:45.007 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:45.007 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:45.007 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:45.267 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:45.267 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:45.267 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:45.267 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:45.267 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.267 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:45.267 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.267 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:45.267 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.267 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.525 request: 00:13:45.525 { 00:13:45.525 "name": "nvme0", 00:13:45.525 "trtype": "tcp", 00:13:45.525 "traddr": "10.0.0.3", 00:13:45.525 "adrfam": "ipv4", 00:13:45.525 "trsvcid": "4420", 00:13:45.525 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:45.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:45.525 "prchk_reftag": false, 00:13:45.525 "prchk_guard": false, 00:13:45.525 "hdgst": false, 00:13:45.525 "ddgst": false, 00:13:45.525 "dhchap_key": "key3", 00:13:45.525 "allow_unrecognized_csi": false, 00:13:45.525 "method": "bdev_nvme_attach_controller", 00:13:45.525 "req_id": 1 00:13:45.525 } 00:13:45.525 Got JSON-RPC error response 00:13:45.525 response: 00:13:45.525 { 00:13:45.526 "code": -5, 00:13:45.526 "message": "Input/output error" 00:13:45.526 } 00:13:45.526 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:45.526 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:45.526 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:45.526 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:45.526 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:45.526 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:45.526 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:45.526 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:45.526 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:45.526 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:45.784 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:46.353 request: 00:13:46.353 { 00:13:46.353 "name": "nvme0", 00:13:46.353 "trtype": "tcp", 00:13:46.353 "traddr": "10.0.0.3", 00:13:46.353 "adrfam": "ipv4", 00:13:46.353 "trsvcid": "4420", 00:13:46.353 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:46.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:46.353 "prchk_reftag": false, 00:13:46.353 "prchk_guard": false, 00:13:46.353 "hdgst": false, 00:13:46.353 "ddgst": false, 00:13:46.353 "dhchap_key": "key0", 00:13:46.353 "dhchap_ctrlr_key": "key1", 00:13:46.353 "allow_unrecognized_csi": false, 00:13:46.353 "method": "bdev_nvme_attach_controller", 00:13:46.353 "req_id": 1 00:13:46.353 } 00:13:46.353 Got JSON-RPC error response 00:13:46.353 response: 00:13:46.353 { 00:13:46.353 "code": -5, 00:13:46.353 "message": "Input/output error" 00:13:46.353 } 00:13:46.353 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:46.353 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:46.353 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.353 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:13:46.353 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:46.353 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:46.353 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:46.612 nvme0n1 00:13:46.612 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:46.612 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:46.612 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.180 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.180 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.180 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.180 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 00:13:47.180 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.442 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.442 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.442 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:47.442 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:47.442 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:48.426 nvme0n1 00:13:48.426 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:48.426 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:48.426 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.685 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.685 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:48.685 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.685 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.685 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.685 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:48.685 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.685 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:48.944 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.944 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:13:48.944 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid 8ff08136-65da-4f4c-b769-a07096c587b5 -l 0 --dhchap-secret DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: --dhchap-ctrl-secret DHHC-1:03:ODIzOGU5NTUyNGJiY2U4ZmMyMTUyZmI3MDBjZjBkNGY0MTRkZTAzOGNjMmEzNmQ4NjlhYjgzMjBiMGM3ZWNmZoy9wjI=: 00:13:49.510 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:49.510 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:49.510 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:49.510 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:49.510 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:49.510 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:49.510 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:49.510 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.510 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.085 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:50.085 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:50.085 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:50.085 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:50.085 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:50.085 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:50.085 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:50.085 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:50.085 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:50.085 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:50.662 request: 00:13:50.662 { 00:13:50.662 "name": "nvme0", 00:13:50.662 "trtype": "tcp", 00:13:50.662 "traddr": "10.0.0.3", 00:13:50.662 "adrfam": "ipv4", 00:13:50.662 "trsvcid": "4420", 00:13:50.662 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:50.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5", 00:13:50.662 "prchk_reftag": false, 00:13:50.662 "prchk_guard": false, 00:13:50.662 "hdgst": false, 00:13:50.662 "ddgst": false, 00:13:50.662 "dhchap_key": "key1", 00:13:50.662 "allow_unrecognized_csi": false, 00:13:50.662 "method": "bdev_nvme_attach_controller", 00:13:50.662 "req_id": 1 00:13:50.662 } 00:13:50.662 Got JSON-RPC error response 00:13:50.662 response: 00:13:50.662 { 00:13:50.662 "code": -5, 00:13:50.662 "message": "Input/output error" 00:13:50.662 } 00:13:50.662 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:50.662 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:50.662 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:50.662 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:50.662 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:50.662 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:50.662 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:51.597 nvme0n1 00:13:51.597 
13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:51.597 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:51.597 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.856 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.856 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.856 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.424 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:52.424 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.424 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.424 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.424 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:52.424 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:52.424 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:52.682 nvme0n1 00:13:52.682 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:52.682 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.682 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:52.940 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.940 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.940 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.197 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:53.197 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.198 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.198 13:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.198 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: '' 2s 00:13:53.198 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:53.198 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:53.198 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: 00:13:53.198 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:53.198 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:53.198 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:53.198 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: ]] 00:13:53.198 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZDkwYTQxNTg1NWNjNGNmZmEzZDA4ODM1ZTcxZmE4YzUC6tmf: 00:13:53.198 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:53.198 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:53.198 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: 2s 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:55.724 13:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:55.724 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: ]] 00:13:55.725 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTNiOWYyNTgxZWYwOWM4NzVmMzk0MDEwYmRmMGY2ODU2OTk2M2E0M2IzODBiYjA2/tJLnA==: 00:13:55.725 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:55.725 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:57.622 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:58.556 nvme0n1 00:13:58.556 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:58.556 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.556 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.556 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.556 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:58.556 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:59.157 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:59.157 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:59.157 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.416 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.416 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:13:59.416 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.416 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.416 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.416 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:59.416 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:59.983 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:59.983 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:59.983 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.241 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.241 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:00.241 13:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.241 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.241 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.241 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:00.241 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:00.241 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:00.241 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:00.241 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.241 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:00.241 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.241 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:00.241 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:00.808 request: 00:14:00.808 { 00:14:00.808 "name": "nvme0", 00:14:00.808 "dhchap_key": "key1", 00:14:00.808 "dhchap_ctrlr_key": "key3", 00:14:00.808 "method": "bdev_nvme_set_keys", 00:14:00.808 "req_id": 1 00:14:00.808 } 00:14:00.808 Got JSON-RPC error response 00:14:00.808 response: 00:14:00.808 { 00:14:00.808 "code": -13, 00:14:00.808 "message": "Permission denied" 00:14:00.808 } 00:14:00.808 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:00.808 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:00.808 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:00.808 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:00.808 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:00.808 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:00.808 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.066 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:01.066 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:02.000 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:02.000 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:02.000 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.259 13:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:02.259 13:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:02.259 13:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.259 13:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.259 13:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.259 13:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:02.259 13:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:02.259 13:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:03.636 nvme0n1 00:14:03.636 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:03.636 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.636 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.636 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.636 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:03.636 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:03.636 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:03.636 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:03.636 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.636 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:03.636 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.636 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:03.636 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:03.894 request: 00:14:03.894 { 00:14:03.894 "name": "nvme0", 00:14:03.894 "dhchap_key": "key2", 00:14:03.894 "dhchap_ctrlr_key": "key0", 00:14:03.894 "method": "bdev_nvme_set_keys", 00:14:03.894 "req_id": 1 00:14:03.894 } 00:14:03.894 Got JSON-RPC error response 00:14:03.894 response: 00:14:03.894 { 00:14:03.894 "code": -13, 00:14:03.894 "message": "Permission denied" 00:14:03.894 } 00:14:03.894 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:04.153 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:04.153 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:04.153 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:04.153 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:04.153 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:04.153 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.411 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:04.411 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:05.346 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:05.346 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:05.346 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.604 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:05.604 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:05.604 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:05.604 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67507 00:14:05.604 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67507 ']' 00:14:05.604 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67507 00:14:05.604 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:05.604 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:05.604 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67507 00:14:05.604 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:05.604 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:05.604 killing process with pid 67507 00:14:05.604 13:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67507' 00:14:05.604 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67507 00:14:05.604 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67507 00:14:06.170 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:06.170 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:06.170 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:06.170 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:06.170 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:06.170 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:06.170 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:06.170 rmmod nvme_tcp 00:14:06.170 rmmod nvme_fabrics 00:14:06.170 rmmod nvme_keyring 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70702 ']' 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70702 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70702 ']' 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70702 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70702 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.170 killing process with pid 70702 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70702' 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70702 00:14:06.170 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70702 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 
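Cleanup here runs through the shared nvmftestfini path: stop the host-side app and the nvmf target, unload the kernel NVMe/TCP modules, strip only the iptables rules tagged SPDK_NVMF, and delete the veth/bridge topology in the lines that follow. A rough manual equivalent, with the pids and module names taken from this run (the real helpers live in test/nvmf/common.sh and autotest_common.sh, which are not reproduced here):

kill 67507 70702                                        # host.sock app and nvmf target started by this run
modprobe -v -r nvme-tcp                                 # unload the NVMe/TCP transport module
modprobe -v -r nvme-fabrics                             # unload nvme-fabrics once no transport needs it
iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep every rule except the SPDK_NVMF-tagged ones

The exact pipeline composition is an assumption based on the iptables-save / grep -v SPDK_NVMF / iptables-restore commands traced in this log.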
00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:06.428 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.sar /tmp/spdk.key-sha256.8TY /tmp/spdk.key-sha384.STL /tmp/spdk.key-sha512.UuZ /tmp/spdk.key-sha512.ED2 /tmp/spdk.key-sha384.IoB /tmp/spdk.key-sha256.5zD '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:06.687 00:14:06.687 real 3m25.317s 00:14:06.687 user 8m14.364s 00:14:06.687 sys 0m31.616s 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.687 ************************************ 00:14:06.687 END TEST nvmf_auth_target 
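The nvmf_auth_target run that ends here exercises DH-HMAC-CHAP key provisioning and rotation entirely over JSON-RPC. Below is a minimal sketch of that sequence, assuming a target already listening on 10.0.0.3:4420 with subsystem nqn.2024-03.io.spdk:cnode0 and key files loaded as key0..key3; the addresses, key names, host NQN and the /var/tmp/host.sock socket are taken from this run, and the surrounding target setup is not shown:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5   # host NQN generated for this run
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# target side: allow this host to authenticate with key0/key1, then attach from the host app
$RPC nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key key1
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
# rotate: publish the new pair on the target first, then re-key the live controller
$RPC nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3
$RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
# re-keying to a pair the subsystem was not given fails with -13 (Permission denied), as in the dumps above
$RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 || true
# inspect and tear down the host-side controller
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

An attach with a key pair that does not match the subsystem configuration fails with JSON-RPC error -5 (Input/output error), as in the request/response dumps earlier; the kernel initiator path is covered above by nvme connect / nvme disconnect with DHHC-1 secrets and by echoing those secrets under /sys/devices/virtual/nvme-fabrics/ctl/nvme0.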
00:14:06.687 ************************************ 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.687 ************************************ 00:14:06.687 START TEST nvmf_bdevio_no_huge 00:14:06.687 ************************************ 00:14:06.687 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:06.947 * Looking for test storage... 00:14:06.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:06.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.947 --rc genhtml_branch_coverage=1 00:14:06.947 --rc genhtml_function_coverage=1 00:14:06.947 --rc genhtml_legend=1 00:14:06.947 --rc geninfo_all_blocks=1 00:14:06.947 --rc geninfo_unexecuted_blocks=1 00:14:06.947 00:14:06.947 ' 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:06.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.947 --rc genhtml_branch_coverage=1 00:14:06.947 --rc genhtml_function_coverage=1 00:14:06.947 --rc genhtml_legend=1 00:14:06.947 --rc geninfo_all_blocks=1 00:14:06.947 --rc geninfo_unexecuted_blocks=1 00:14:06.947 00:14:06.947 ' 00:14:06.947 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:06.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.948 --rc genhtml_branch_coverage=1 00:14:06.948 --rc genhtml_function_coverage=1 00:14:06.948 --rc genhtml_legend=1 00:14:06.948 --rc geninfo_all_blocks=1 00:14:06.948 --rc geninfo_unexecuted_blocks=1 00:14:06.948 00:14:06.948 ' 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:06.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.948 --rc genhtml_branch_coverage=1 00:14:06.948 --rc genhtml_function_coverage=1 00:14:06.948 --rc genhtml_legend=1 00:14:06.948 --rc geninfo_all_blocks=1 00:14:06.948 --rc geninfo_unexecuted_blocks=1 00:14:06.948 00:14:06.948 ' 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:06.948 
13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:06.948 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:06.948 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:06.949 
13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:06.949 Cannot find device "nvmf_init_br" 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:06.949 Cannot find device "nvmf_init_br2" 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:06.949 Cannot find device "nvmf_tgt_br" 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:06.949 Cannot find device "nvmf_tgt_br2" 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:06.949 Cannot find device "nvmf_init_br" 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:06.949 Cannot find device "nvmf_init_br2" 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:06.949 Cannot find device "nvmf_tgt_br" 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:06.949 Cannot find device "nvmf_tgt_br2" 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:06.949 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:07.207 Cannot find device "nvmf_br" 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:07.208 Cannot find device "nvmf_init_if" 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:07.208 Cannot find device "nvmf_init_if2" 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:07.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:07.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:07.208 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:07.208 13:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:07.208 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:07.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:07.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:14:07.466 00:14:07.466 --- 10.0.0.3 ping statistics --- 00:14:07.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.466 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:07.466 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:07.466 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:14:07.466 00:14:07.466 --- 10.0.0.4 ping statistics --- 00:14:07.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.466 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:07.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:14:07.466 00:14:07.466 --- 10.0.0.1 ping statistics --- 00:14:07.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.466 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:07.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:07.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:07.466 00:14:07.466 --- 10.0.0.2 ping statistics --- 00:14:07.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.466 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:07.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71345 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71345 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71345 ']' 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.466 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:07.466 [2024-11-20 13:33:19.276915] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
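Editor's note: the nvmf_veth_init trace above reduces to a small, repeatable topology — initiator-side veth ends (10.0.0.1/10.0.0.2) and target-side veth ends (10.0.0.3/10.0.0.4, with their peers moved into the nvmf_tgt_ns_spdk namespace) all enslaved to one bridge, plus iptables ACCEPT rules for NVMe/TCP port 4420. The following is a minimal standalone sketch of that idea only, with interface names and addresses copied from the trace; the real helper in nvmf/common.sh also creates the second pair (.2/.4) and handles cleanup and error paths.

  #!/usr/bin/env bash
  # Hypothetical condensed recreation of the topology traced by nvmf_veth_init above.
  set -e
  ip netns add nvmf_tgt_ns_spdk
  # One initiator-side and one target-side veth pair (names as in the trace).
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # Addressing: initiator 10.0.0.1, target 10.0.0.3 (the second pair, .2/.4, follows the same pattern).
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side peers together so initiator and namespaced target can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  # Allow NVMe/TCP (port 4420) in on the initiator interface, and forwarding across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # connectivity check, as in the trace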
00:14:07.466 [2024-11-20 13:33:19.277051] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:07.783 [2024-11-20 13:33:19.445653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:07.783 [2024-11-20 13:33:19.515918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.783 [2024-11-20 13:33:19.515971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.783 [2024-11-20 13:33:19.515983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.783 [2024-11-20 13:33:19.515991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.783 [2024-11-20 13:33:19.515999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.783 [2024-11-20 13:33:19.517013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:07.783 [2024-11-20 13:33:19.517125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:07.783 [2024-11-20 13:33:19.517486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:07.783 [2024-11-20 13:33:19.517652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.783 [2024-11-20 13:33:19.522472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:08.719 [2024-11-20 13:33:20.386303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:08.719 Malloc0 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.719 13:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:08.719 [2024-11-20 13:33:20.434801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:08.719 { 00:14:08.719 "params": { 00:14:08.719 "name": "Nvme$subsystem", 00:14:08.719 "trtype": "$TEST_TRANSPORT", 00:14:08.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:08.719 "adrfam": "ipv4", 00:14:08.719 "trsvcid": "$NVMF_PORT", 00:14:08.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:08.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:08.719 "hdgst": ${hdgst:-false}, 00:14:08.719 "ddgst": ${ddgst:-false} 00:14:08.719 }, 00:14:08.719 "method": "bdev_nvme_attach_controller" 00:14:08.719 } 00:14:08.719 EOF 00:14:08.719 )") 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:08.719 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:08.719 "params": { 00:14:08.719 "name": "Nvme1", 00:14:08.719 "trtype": "tcp", 00:14:08.719 "traddr": "10.0.0.3", 00:14:08.719 "adrfam": "ipv4", 00:14:08.719 "trsvcid": "4420", 00:14:08.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:08.719 "hdgst": false, 00:14:08.719 "ddgst": false 00:14:08.719 }, 00:14:08.719 "method": "bdev_nvme_attach_controller" 00:14:08.719 }' 00:14:08.719 [2024-11-20 13:33:20.498007] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:14:08.719 [2024-11-20 13:33:20.498104] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71381 ] 00:14:08.719 [2024-11-20 13:33:20.664605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:08.978 [2024-11-20 13:33:20.750519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.978 [2024-11-20 13:33:20.750609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.978 [2024-11-20 13:33:20.750615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.978 [2024-11-20 13:33:20.765299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:09.237 I/O targets: 00:14:09.237 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:09.237 00:14:09.237 00:14:09.238 CUnit - A unit testing framework for C - Version 2.1-3 00:14:09.238 http://cunit.sourceforge.net/ 00:14:09.238 00:14:09.238 00:14:09.238 Suite: bdevio tests on: Nvme1n1 00:14:09.238 Test: blockdev write read block ...passed 00:14:09.238 Test: blockdev write zeroes read block ...passed 00:14:09.238 Test: blockdev write zeroes read no split ...passed 00:14:09.238 Test: blockdev write zeroes read split ...passed 00:14:09.238 Test: blockdev write zeroes read split partial ...passed 00:14:09.238 Test: blockdev reset ...[2024-11-20 13:33:21.016833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:09.238 [2024-11-20 13:33:21.017077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1627310 (9): Bad file descriptor 00:14:09.238 [2024-11-20 13:33:21.031218] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
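Editor's note: stripped of the xtrace noise, the target-side RPC sequence and the initiator-side attach-controller entry recorded above are short. This is a hedged recap of what the log shows, not a replacement for bdevio.sh: rpc_cmd in the trace effectively invokes scripts/rpc.py against the target's /var/tmp/spdk.sock, and gen_nvmf_target_json joins entries like the one below into the JSON config handed to bdevio on /dev/fd/62. Commands, NQNs and addresses are copied from the trace.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side (nvmf_tgt already running inside nvmf_tgt_ns_spdk with --no-huge -s 1024):
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Initiator side: the bdev_nvme_attach_controller entry printed by gen_nvmf_target_json above;
  # the helper wraps entries like this into the full JSON config consumed by bdevio.
  nvme1_cfg='{
    "params": {
      "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }'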
00:14:09.238 passed 00:14:09.238 Test: blockdev write read 8 blocks ...passed 00:14:09.238 Test: blockdev write read size > 128k ...passed 00:14:09.238 Test: blockdev write read invalid size ...passed 00:14:09.238 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:09.238 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:09.238 Test: blockdev write read max offset ...passed 00:14:09.238 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:09.238 Test: blockdev writev readv 8 blocks ...passed 00:14:09.238 Test: blockdev writev readv 30 x 1block ...passed 00:14:09.238 Test: blockdev writev readv block ...passed 00:14:09.238 Test: blockdev writev readv size > 128k ...passed 00:14:09.238 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:09.238 Test: blockdev comparev and writev ...[2024-11-20 13:33:21.040935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.238 [2024-11-20 13:33:21.040996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:09.238 [2024-11-20 13:33:21.041018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.238 [2024-11-20 13:33:21.041030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:09.238 [2024-11-20 13:33:21.041504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.238 [2024-11-20 13:33:21.041527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:09.238 [2024-11-20 13:33:21.041544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.238 [2024-11-20 13:33:21.041555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:09.238 [2024-11-20 13:33:21.041834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.238 [2024-11-20 13:33:21.041856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:09.238 [2024-11-20 13:33:21.041874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.238 [2024-11-20 13:33:21.041884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:09.238 [2024-11-20 13:33:21.042176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.238 [2024-11-20 13:33:21.042212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:09.238 [2024-11-20 13:33:21.042231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.238 [2024-11-20 13:33:21.042241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:09.238 passed 00:14:09.238 Test: blockdev nvme passthru rw ...passed 00:14:09.238 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:33:21.043077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:09.238 [2024-11-20 13:33:21.043102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:09.238 [2024-11-20 13:33:21.043225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:09.238 [2024-11-20 13:33:21.043246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:09.238 [2024-11-20 13:33:21.043353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:09.238 [2024-11-20 13:33:21.043369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:09.238 [2024-11-20 13:33:21.043470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:09.238 [2024-11-20 13:33:21.043486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:09.238 passed 00:14:09.238 Test: blockdev nvme admin passthru ...passed 00:14:09.238 Test: blockdev copy ...passed 00:14:09.238 00:14:09.238 Run Summary: Type Total Ran Passed Failed Inactive 00:14:09.238 suites 1 1 n/a 0 0 00:14:09.238 tests 23 23 23 0 0 00:14:09.238 asserts 152 152 152 0 n/a 00:14:09.238 00:14:09.238 Elapsed time = 0.177 seconds 00:14:09.497 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.497 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.497 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:09.497 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.497 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:09.497 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:09.497 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:09.497 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:09.755 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:09.755 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:09.755 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:09.755 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:09.755 rmmod nvme_tcp 00:14:09.755 rmmod nvme_fabrics 00:14:09.755 rmmod nvme_keyring 00:14:09.755 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.755 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:09.755 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:09.756 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71345 ']' 00:14:09.756 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71345 00:14:09.756 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71345 ']' 00:14:09.756 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71345 00:14:09.756 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:14:09.756 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.756 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71345 00:14:09.756 killing process with pid 71345 00:14:09.756 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:09.756 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:09.756 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71345' 00:14:09.756 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71345 00:14:09.756 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71345 00:14:10.014 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:10.014 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:10.014 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:10.014 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:10.014 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:10.273 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:10.273 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:10.273 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:10.273 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:10.273 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:10.273 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:10.273 13:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.273 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:10.532 ************************************ 00:14:10.532 END TEST nvmf_bdevio_no_huge 00:14:10.532 ************************************ 00:14:10.532 00:14:10.532 real 0m3.656s 00:14:10.532 user 0m11.355s 00:14:10.532 sys 0m1.469s 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.532 ************************************ 00:14:10.532 START TEST nvmf_tls 00:14:10.532 ************************************ 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:10.532 * Looking for test storage... 
00:14:10.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:10.532 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:10.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.533 --rc genhtml_branch_coverage=1 00:14:10.533 --rc genhtml_function_coverage=1 00:14:10.533 --rc genhtml_legend=1 00:14:10.533 --rc geninfo_all_blocks=1 00:14:10.533 --rc geninfo_unexecuted_blocks=1 00:14:10.533 00:14:10.533 ' 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:10.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.533 --rc genhtml_branch_coverage=1 00:14:10.533 --rc genhtml_function_coverage=1 00:14:10.533 --rc genhtml_legend=1 00:14:10.533 --rc geninfo_all_blocks=1 00:14:10.533 --rc geninfo_unexecuted_blocks=1 00:14:10.533 00:14:10.533 ' 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:10.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.533 --rc genhtml_branch_coverage=1 00:14:10.533 --rc genhtml_function_coverage=1 00:14:10.533 --rc genhtml_legend=1 00:14:10.533 --rc geninfo_all_blocks=1 00:14:10.533 --rc geninfo_unexecuted_blocks=1 00:14:10.533 00:14:10.533 ' 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:10.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.533 --rc genhtml_branch_coverage=1 00:14:10.533 --rc genhtml_function_coverage=1 00:14:10.533 --rc genhtml_legend=1 00:14:10.533 --rc geninfo_all_blocks=1 00:14:10.533 --rc geninfo_unexecuted_blocks=1 00:14:10.533 00:14:10.533 ' 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.533 13:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:10.533 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.793 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:10.793 
13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:10.793 Cannot find device "nvmf_init_br" 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:10.793 Cannot find device "nvmf_init_br2" 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:10.793 Cannot find device "nvmf_tgt_br" 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.793 Cannot find device "nvmf_tgt_br2" 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:10.793 Cannot find device "nvmf_init_br" 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:10.793 Cannot find device "nvmf_init_br2" 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:10.793 Cannot find device "nvmf_tgt_br" 00:14:10.793 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:10.794 Cannot find device "nvmf_tgt_br2" 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:10.794 Cannot find device "nvmf_br" 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:10.794 Cannot find device "nvmf_init_if" 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:10.794 Cannot find device "nvmf_init_if2" 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:10.794 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:11.053 13:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:11.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:11.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:14:11.053 00:14:11.053 --- 10.0.0.3 ping statistics --- 00:14:11.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.053 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:11.053 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:11.053 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:11.053 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:14:11.053 00:14:11.053 --- 10.0.0.4 ping statistics --- 00:14:11.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.053 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:11.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:11.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:11.054 00:14:11.054 --- 10.0.0.1 ping statistics --- 00:14:11.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.054 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:11.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:11.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:14:11.054 00:14:11.054 --- 10.0.0.2 ping statistics --- 00:14:11.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.054 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71613 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71613 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71613 ']' 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.054 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.054 [2024-11-20 13:33:22.979314] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
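Note on the block above: before any TLS case runs, nvmf/common.sh builds an isolated test network. Two veth pairs serve the initiator side (nvmf_init_if/if2) and two the target side (nvmf_tgt_if/if2); the target ends are moved into the nvmf_tgt_ns_spdk namespace, addressed 10.0.0.1 through 10.0.0.4/24, joined through the nvmf_br bridge, opened with iptables ACCEPT rules on TCP port 4420, and verified with the four pings. Only then is nvmf_tgt launched inside the namespace with --wait-for-rpc so that socket/TLS options can be configured before the framework initializes. A condensed, single-path sketch of that topology (the script actually sets up both pairs, and the namespace is created earlier in nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator side reaching the target namespace over the bridge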
00:14:11.054 [2024-11-20 13:33:22.979405] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.313 [2024-11-20 13:33:23.131659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.313 [2024-11-20 13:33:23.203609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.313 [2024-11-20 13:33:23.203677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.313 [2024-11-20 13:33:23.203703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.313 [2024-11-20 13:33:23.203722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.313 [2024-11-20 13:33:23.203731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.313 [2024-11-20 13:33:23.204216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.313 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.313 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:11.313 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:11.313 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:11.313 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.572 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.572 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:11.572 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:11.830 true 00:14:11.830 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:11.830 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:12.088 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:12.088 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:12.088 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:12.370 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:12.370 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:12.629 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:12.629 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:12.629 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:12.888 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:12.888 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:13.147 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:13.148 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:13.148 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:13.148 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:13.406 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:13.406 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:13.406 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:13.666 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:13.666 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:13.925 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:13.925 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:13.925 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:14.184 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:14.184 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:14.443 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:14.443 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.5SPbpVEIA7 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.x9gO6RE1Xn 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.5SPbpVEIA7 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.x9gO6RE1Xn 00:14:14.701 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:14.961 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:15.528 [2024-11-20 13:33:27.204089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:15.528 13:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.5SPbpVEIA7 00:14:15.528 13:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5SPbpVEIA7 00:14:15.528 13:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:15.787 [2024-11-20 13:33:27.583881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.787 13:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:16.045 13:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:16.304 [2024-11-20 13:33:28.167941] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:16.304 [2024-11-20 13:33:28.168268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:16.304 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:16.563 malloc0 00:14:16.563 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:16.825 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5SPbpVEIA7 00:14:17.084 13:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:17.342 13:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.5SPbpVEIA7 00:14:29.553 Initializing NVMe Controllers 00:14:29.553 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:29.553 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:29.553 Initialization complete. Launching workers. 00:14:29.553 ======================================================== 00:14:29.553 Latency(us) 00:14:29.553 Device Information : IOPS MiB/s Average min max 00:14:29.553 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9823.58 38.37 6516.47 991.24 7783.65 00:14:29.553 ======================================================== 00:14:29.553 Total : 9823.58 38.37 6516.47 991.24 7783.65 00:14:29.553 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5SPbpVEIA7 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5SPbpVEIA7 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71850 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71850 /var/tmp/bdevperf.sock 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71850 ']' 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
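The sequence above is the target-side TLS provisioning. format_interchange_psk wraps the supplied key material plus a checksum in base64 under the NVMeTLSkey-1:01: prefix (the trailing argument of 1 selects the digest field); each interchange key is written to a mktemp file and locked down to mode 0600. The target, started with --wait-for-rpc, is then configured over RPC: ssl socket implementation, TLS 1.3, TCP transport, subsystem cnode1 with a TLS listener (-k), a malloc namespace, and the first key registered in the keyring and bound to host1. The spdk_nvme_perf pass that follows (run inside the namespace with -S ssl and --psk-path) confirms the listener accepts TLS connections, sustaining roughly 9.8k IOPS at queue depth 64. A condensed sketch of the same RPC sequence, using the names from the log (rpc.py abbreviated, default socket /var/tmp/spdk.sock):

    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.5SPbpVEIA7
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0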
00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.553 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.553 [2024-11-20 13:33:39.495411] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:14:29.553 [2024-11-20 13:33:39.495505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71850 ] 00:14:29.553 [2024-11-20 13:33:39.646102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.553 [2024-11-20 13:33:39.711325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.553 [2024-11-20 13:33:39.768641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:29.553 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.553 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:29.553 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5SPbpVEIA7 00:14:29.553 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:29.553 [2024-11-20 13:33:40.973397] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:29.553 TLSTESTn1 00:14:29.553 13:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:29.553 Running I/O for 10 seconds... 
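This first bdevperf pass exercises the host (initiator) side of TLS through bdev_nvme: the same key file is registered on the bdevperf RPC socket, a controller is attached with --psk, and bdevperf.py perform_tests drives the verify workload against the resulting TLSTESTn1 bdev for 10 seconds. A condensed sketch of those calls, assuming bdevperf was started with -z -r /var/tmp/bdevperf.sock as shown above:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5SPbpVEIA7
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
        -s /var/tmp/bdevperf.sock perform_tests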
00:14:31.451 4159.00 IOPS, 16.25 MiB/s [2024-11-20T13:33:44.342Z] 4175.50 IOPS, 16.31 MiB/s [2024-11-20T13:33:45.278Z] 4151.33 IOPS, 16.22 MiB/s [2024-11-20T13:33:46.215Z] 4158.50 IOPS, 16.24 MiB/s [2024-11-20T13:33:47.591Z] 4152.60 IOPS, 16.22 MiB/s [2024-11-20T13:33:48.527Z] 4149.67 IOPS, 16.21 MiB/s [2024-11-20T13:33:49.464Z] 4149.00 IOPS, 16.21 MiB/s [2024-11-20T13:33:50.400Z] 4144.50 IOPS, 16.19 MiB/s [2024-11-20T13:33:51.337Z] 4151.33 IOPS, 16.22 MiB/s [2024-11-20T13:33:51.337Z] 4148.60 IOPS, 16.21 MiB/s 00:14:39.380 Latency(us) 00:14:39.380 [2024-11-20T13:33:51.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.380 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:39.380 Verification LBA range: start 0x0 length 0x2000 00:14:39.380 TLSTESTn1 : 10.02 4154.03 16.23 0.00 0.00 30755.37 6613.18 30742.34 00:14:39.380 [2024-11-20T13:33:51.337Z] =================================================================================================================== 00:14:39.380 [2024-11-20T13:33:51.337Z] Total : 4154.03 16.23 0.00 0.00 30755.37 6613.18 30742.34 00:14:39.380 { 00:14:39.380 "results": [ 00:14:39.380 { 00:14:39.380 "job": "TLSTESTn1", 00:14:39.380 "core_mask": "0x4", 00:14:39.380 "workload": "verify", 00:14:39.380 "status": "finished", 00:14:39.380 "verify_range": { 00:14:39.380 "start": 0, 00:14:39.380 "length": 8192 00:14:39.380 }, 00:14:39.380 "queue_depth": 128, 00:14:39.380 "io_size": 4096, 00:14:39.380 "runtime": 10.016298, 00:14:39.380 "iops": 4154.029762293414, 00:14:39.380 "mibps": 16.22667875895865, 00:14:39.380 "io_failed": 0, 00:14:39.380 "io_timeout": 0, 00:14:39.380 "avg_latency_us": 30755.36774396532, 00:14:39.380 "min_latency_us": 6613.178181818182, 00:14:39.380 "max_latency_us": 30742.34181818182 00:14:39.380 } 00:14:39.380 ], 00:14:39.380 "core_count": 1 00:14:39.380 } 00:14:39.380 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:39.380 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71850 00:14:39.380 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71850 ']' 00:14:39.380 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71850 00:14:39.380 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:39.380 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.380 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71850 00:14:39.380 killing process with pid 71850 00:14:39.380 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.380 00:14:39.380 Latency(us) 00:14:39.380 [2024-11-20T13:33:51.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.380 [2024-11-20T13:33:51.337Z] =================================================================================================================== 00:14:39.380 [2024-11-20T13:33:51.337Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.380 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:39.380 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:39.380 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71850' 00:14:39.380 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71850 00:14:39.380 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71850 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x9gO6RE1Xn 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x9gO6RE1Xn 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x9gO6RE1Xn 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.x9gO6RE1Xn 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71985 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71985 /var/tmp/bdevperf.sock 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71985 ']' 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.639 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.639 [2024-11-20 13:33:51.519690] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
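With the happy path verified, tls.sh@147 repeats the bdevperf flow under a negation wrapper, this time handing the initiator the second key (/tmp/tmp.x9gO6RE1Xn), which the target never associated with host1, so the attach is expected to fail. The NOT helper from autotest_common.sh simply inverts the exit status; a rough equivalent, inferred from how it is used here rather than from the actual implementation:

    NOT() {
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0        # failure was the expected outcome
    }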
00:14:39.639 [2024-11-20 13:33:51.519822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71985 ] 00:14:39.898 [2024-11-20 13:33:51.673812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.898 [2024-11-20 13:33:51.739804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.898 [2024-11-20 13:33:51.800051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:40.156 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.156 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:40.156 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.x9gO6RE1Xn 00:14:40.414 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:40.674 [2024-11-20 13:33:52.447418] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:40.674 [2024-11-20 13:33:52.452480] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:40.674 [2024-11-20 13:33:52.453125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5fb0 (107): Transport endpoint is not connected 00:14:40.674 [2024-11-20 13:33:52.454120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5fb0 (9): Bad file descriptor 00:14:40.674 [2024-11-20 13:33:52.455119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:40.674 [2024-11-20 13:33:52.455142] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:40.674 [2024-11-20 13:33:52.455164] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:40.674 [2024-11-20 13:33:52.455179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:40.674 request: 00:14:40.674 { 00:14:40.674 "name": "TLSTEST", 00:14:40.674 "trtype": "tcp", 00:14:40.674 "traddr": "10.0.0.3", 00:14:40.674 "adrfam": "ipv4", 00:14:40.674 "trsvcid": "4420", 00:14:40.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:40.674 "prchk_reftag": false, 00:14:40.674 "prchk_guard": false, 00:14:40.674 "hdgst": false, 00:14:40.674 "ddgst": false, 00:14:40.674 "psk": "key0", 00:14:40.674 "allow_unrecognized_csi": false, 00:14:40.674 "method": "bdev_nvme_attach_controller", 00:14:40.674 "req_id": 1 00:14:40.674 } 00:14:40.674 Got JSON-RPC error response 00:14:40.674 response: 00:14:40.674 { 00:14:40.674 "code": -5, 00:14:40.674 "message": "Input/output error" 00:14:40.674 } 00:14:40.674 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71985 00:14:40.674 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71985 ']' 00:14:40.674 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71985 00:14:40.674 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:40.674 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.674 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71985 00:14:40.674 killing process with pid 71985 00:14:40.674 Received shutdown signal, test time was about 10.000000 seconds 00:14:40.674 00:14:40.674 Latency(us) 00:14:40.674 [2024-11-20T13:33:52.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.674 [2024-11-20T13:33:52.631Z] =================================================================================================================== 00:14:40.674 [2024-11-20T13:33:52.631Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:40.674 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:40.674 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:40.674 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71985' 00:14:40.674 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71985 00:14:40.674 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71985 00:14:40.933 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:40.933 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:40.933 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:40.933 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:40.933 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:40.933 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5SPbpVEIA7 00:14:40.933 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5SPbpVEIA7 
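The mismatched key plays out as intended: the TLS handshake with the target fails, the socket is torn down (errno 107, Transport endpoint is not connected), and bdev_nvme_attach_controller surfaces this as JSON-RPC error -5 (Input/output error), which is exactly the failure the NOT wrapper asserts. A hypothetical manual reproduction of the same check, reusing the names and flags from the log:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.x9gO6RE1Xn
    if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
          -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
          -q nqn.2016-06.io.spdk:host1 --psk key0; then
        echo "unexpected: attach succeeded with a PSK the target does not know" >&2
        exit 1
    fi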
00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5SPbpVEIA7 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5SPbpVEIA7 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72006 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72006 /var/tmp/bdevperf.sock 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72006 ']' 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:40.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.934 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.934 [2024-11-20 13:33:52.785795] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:14:40.934 [2024-11-20 13:33:52.786218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72006 ] 00:14:41.193 [2024-11-20 13:33:52.932399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.193 [2024-11-20 13:33:52.998380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.193 [2024-11-20 13:33:53.055500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:42.163 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.163 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:42.163 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5SPbpVEIA7 00:14:42.163 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:42.424 [2024-11-20 13:33:54.291528] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.424 [2024-11-20 13:33:54.302371] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:42.424 [2024-11-20 13:33:54.302410] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:42.424 [2024-11-20 13:33:54.302473] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:42.424 [2024-11-20 13:33:54.303327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d7fb0 (107): Transport endpoint is not connected 00:14:42.425 [2024-11-20 13:33:54.304313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d7fb0 (9): Bad file descriptor 00:14:42.425 [2024-11-20 13:33:54.305324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:42.425 [2024-11-20 13:33:54.305353] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:42.425 [2024-11-20 13:33:54.305364] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:42.425 [2024-11-20 13:33:54.305380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:42.425 request: 00:14:42.425 { 00:14:42.425 "name": "TLSTEST", 00:14:42.425 "trtype": "tcp", 00:14:42.425 "traddr": "10.0.0.3", 00:14:42.425 "adrfam": "ipv4", 00:14:42.425 "trsvcid": "4420", 00:14:42.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.425 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:42.425 "prchk_reftag": false, 00:14:42.425 "prchk_guard": false, 00:14:42.425 "hdgst": false, 00:14:42.425 "ddgst": false, 00:14:42.425 "psk": "key0", 00:14:42.425 "allow_unrecognized_csi": false, 00:14:42.425 "method": "bdev_nvme_attach_controller", 00:14:42.425 "req_id": 1 00:14:42.425 } 00:14:42.425 Got JSON-RPC error response 00:14:42.425 response: 00:14:42.425 { 00:14:42.425 "code": -5, 00:14:42.425 "message": "Input/output error" 00:14:42.425 } 00:14:42.425 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72006 00:14:42.425 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72006 ']' 00:14:42.425 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72006 00:14:42.425 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:42.425 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.425 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72006 00:14:42.425 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:42.425 killing process with pid 72006 00:14:42.425 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:42.425 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72006' 00:14:42.425 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72006 00:14:42.425 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.425 00:14:42.425 Latency(us) 00:14:42.425 [2024-11-20T13:33:54.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.425 [2024-11-20T13:33:54.382Z] =================================================================================================================== 00:14:42.425 [2024-11-20T13:33:54.382Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:42.425 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72006 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5SPbpVEIA7 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5SPbpVEIA7 
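This second negative case fails for a different reason: the key is correct, but the hostnqn is nqn.2016-06.io.spdk:host2, which was never bound to a PSK on the subsystem. The target derives the TLS PSK identity as "NVMe0R01 <hostnqn> <subnqn>" and resolves it against the hosts added with nvmf_subsystem_add_host, so posix_sock_psk_find_session_server_cb cannot find a match and the connection is dropped with the same -5 error. The two cases that follow repeat the pattern with an unknown subsystem NQN (cnode2) and with an empty key path, see below. For host2 to succeed, the target would first need its own binding, along these lines (the key file name here is hypothetical):

    rpc.py keyring_file_add_key key1 /tmp/host2_psk.txt
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key1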
00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5SPbpVEIA7 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5SPbpVEIA7 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72040 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72040 /var/tmp/bdevperf.sock 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72040 ']' 00:14:42.685 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.686 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.686 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.686 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.686 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.686 [2024-11-20 13:33:54.618529] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:14:42.686 [2024-11-20 13:33:54.618663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72040 ] 00:14:42.945 [2024-11-20 13:33:54.764397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.945 [2024-11-20 13:33:54.825920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.945 [2024-11-20 13:33:54.882314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:43.203 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.203 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:43.203 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5SPbpVEIA7 00:14:43.461 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:43.720 [2024-11-20 13:33:55.519145] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.720 [2024-11-20 13:33:55.524142] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:43.720 [2024-11-20 13:33:55.524183] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:43.720 [2024-11-20 13:33:55.524264] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:43.720 [2024-11-20 13:33:55.524885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce4fb0 (107): Transport endpoint is not connected 00:14:43.720 [2024-11-20 13:33:55.525871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce4fb0 (9): Bad file descriptor 00:14:43.720 [2024-11-20 13:33:55.526867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:43.720 [2024-11-20 13:33:55.527026] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:43.720 [2024-11-20 13:33:55.527044] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:43.720 [2024-11-20 13:33:55.527063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:14:43.720 request: 00:14:43.720 { 00:14:43.720 "name": "TLSTEST", 00:14:43.720 "trtype": "tcp", 00:14:43.720 "traddr": "10.0.0.3", 00:14:43.720 "adrfam": "ipv4", 00:14:43.720 "trsvcid": "4420", 00:14:43.720 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:43.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.720 "prchk_reftag": false, 00:14:43.720 "prchk_guard": false, 00:14:43.720 "hdgst": false, 00:14:43.720 "ddgst": false, 00:14:43.720 "psk": "key0", 00:14:43.720 "allow_unrecognized_csi": false, 00:14:43.720 "method": "bdev_nvme_attach_controller", 00:14:43.720 "req_id": 1 00:14:43.720 } 00:14:43.720 Got JSON-RPC error response 00:14:43.720 response: 00:14:43.720 { 00:14:43.720 "code": -5, 00:14:43.720 "message": "Input/output error" 00:14:43.720 } 00:14:43.720 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72040 00:14:43.720 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72040 ']' 00:14:43.720 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72040 00:14:43.720 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:43.720 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.720 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72040 00:14:43.720 killing process with pid 72040 00:14:43.720 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.720 00:14:43.720 Latency(us) 00:14:43.720 [2024-11-20T13:33:55.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.720 [2024-11-20T13:33:55.677Z] =================================================================================================================== 00:14:43.720 [2024-11-20T13:33:55.677Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:43.720 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:43.720 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:43.720 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72040' 00:14:43.720 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72040 00:14:43.720 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72040 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:43.981 13:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72061 00:14:43.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72061 /var/tmp/bdevperf.sock 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72061 ']' 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.981 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.981 [2024-11-20 13:33:55.851333] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
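The final negative case below (tls.sh@156, bdevperf pid 72061) passes an empty string instead of a key path. It is expected to fail even earlier than the previous cases: keyring_file only accepts absolute paths, so key registration itself is rejected, and the subsequent attach then fails with -126 (Required key not available) because key0 was never loaded into the keyring. In outline, reusing the socket and NQNs from the log:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''    # rejected: non-absolute path
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0                      # -126, key not loaded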
00:14:43.981 [2024-11-20 13:33:55.851656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72061 ] 00:14:44.239 [2024-11-20 13:33:56.000672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.239 [2024-11-20 13:33:56.062812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.239 [2024-11-20 13:33:56.118866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.173 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.173 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:45.173 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:45.431 [2024-11-20 13:33:57.252073] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:45.431 [2024-11-20 13:33:57.252354] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:45.431 request: 00:14:45.431 { 00:14:45.431 "name": "key0", 00:14:45.431 "path": "", 00:14:45.431 "method": "keyring_file_add_key", 00:14:45.431 "req_id": 1 00:14:45.431 } 00:14:45.431 Got JSON-RPC error response 00:14:45.431 response: 00:14:45.431 { 00:14:45.431 "code": -1, 00:14:45.431 "message": "Operation not permitted" 00:14:45.431 } 00:14:45.431 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:45.690 [2024-11-20 13:33:57.520290] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:45.690 [2024-11-20 13:33:57.520563] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:45.690 request: 00:14:45.690 { 00:14:45.690 "name": "TLSTEST", 00:14:45.690 "trtype": "tcp", 00:14:45.690 "traddr": "10.0.0.3", 00:14:45.690 "adrfam": "ipv4", 00:14:45.690 "trsvcid": "4420", 00:14:45.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:45.690 "prchk_reftag": false, 00:14:45.690 "prchk_guard": false, 00:14:45.690 "hdgst": false, 00:14:45.690 "ddgst": false, 00:14:45.690 "psk": "key0", 00:14:45.690 "allow_unrecognized_csi": false, 00:14:45.690 "method": "bdev_nvme_attach_controller", 00:14:45.690 "req_id": 1 00:14:45.690 } 00:14:45.690 Got JSON-RPC error response 00:14:45.690 response: 00:14:45.690 { 00:14:45.690 "code": -126, 00:14:45.690 "message": "Required key not available" 00:14:45.690 } 00:14:45.690 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72061 00:14:45.690 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72061 ']' 00:14:45.690 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72061 00:14:45.690 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:45.690 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.690 13:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72061 00:14:45.690 killing process with pid 72061 00:14:45.690 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:45.690 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:45.690 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72061' 00:14:45.690 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72061 00:14:45.690 Received shutdown signal, test time was about 10.000000 seconds 00:14:45.690 00:14:45.690 Latency(us) 00:14:45.690 [2024-11-20T13:33:57.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.690 [2024-11-20T13:33:57.647Z] =================================================================================================================== 00:14:45.690 [2024-11-20T13:33:57.647Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:45.690 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72061 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71613 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71613 ']' 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71613 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71613 00:14:45.949 killing process with pid 71613 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71613' 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71613 00:14:45.949 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71613 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Dy3gAuYSbe 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Dy3gAuYSbe 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72111 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72111 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72111 ']' 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.208 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.208 [2024-11-20 13:33:58.155375] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:14:46.208 [2024-11-20 13:33:58.155694] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.467 [2024-11-20 13:33:58.315261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.467 [2024-11-20 13:33:58.383037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.467 [2024-11-20 13:33:58.383364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
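The format_interchange_psk call above is what turns the raw 48-byte key 00112233445566778899aabbccddeeff0011223344556677 into the interchange form NVMeTLSkey-1:02:...: that gets written to /tmp/tmp.Dy3gAuYSbe; the 02 hash field corresponds to SHA-384 and the base64 payload is the configured PSK with a CRC-32 appended. A rough standalone sketch of that helper, assuming (per the NVMe/TCP PSK interchange format) that the CRC-32 is appended in little-endian byte order:

key=00112233445566778899aabbccddeeff0011223344556677
python3 -c '
import base64, sys, zlib
k = sys.argv[1].encode()                    # configured PSK bytes (48 bytes here)
crc = zlib.crc32(k).to_bytes(4, "little")   # CRC-32 of the PSK, assumed little-endian
print("NVMeTLSkey-1:02:" + base64.b64encode(k + crc).decode() + ":")
' "$key"

If the endianness assumption holds, this prints the same NVMeTLSkey-1:02:MDAx...wWXNJw==: value captured in the trace; the key is then written to the temp file with echo -n and locked down to 0600, which matters for the keyring permission checks exercised later in the run.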
00:14:46.467 [2024-11-20 13:33:58.383510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.467 [2024-11-20 13:33:58.383639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.467 [2024-11-20 13:33:58.383680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.467 [2024-11-20 13:33:58.384160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.726 [2024-11-20 13:33:58.440302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.726 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.726 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:46.726 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:46.726 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:46.726 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.726 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.726 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Dy3gAuYSbe 00:14:46.726 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Dy3gAuYSbe 00:14:46.726 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:46.984 [2024-11-20 13:33:58.843298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.984 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:47.242 13:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:47.499 [2024-11-20 13:33:59.399387] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:47.500 [2024-11-20 13:33:59.399637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:47.500 13:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:47.758 malloc0 00:14:47.758 13:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:48.016 13:33:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe 00:14:48.275 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dy3gAuYSbe 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
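setup_nvmf_tgt, traced above, reduces to a fixed RPC sequence against the target's default /var/tmp/spdk.sock: create the TCP transport, create the subsystem, add a TLS-enabled listener (-k), back it with a malloc namespace, register the PSK file in the keyring, and allow the host with that named key. Condensed, with the addresses, NQNs and key path copied from this run (rpc is just local shorthand for the rpc.py path):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0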
00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Dy3gAuYSbe 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72159 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72159 /var/tmp/bdevperf.sock 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72159 ']' 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.532 13:34:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.790 [2024-11-20 13:34:00.515860] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
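On the initiator side, run_bdevperf (whose locals are set just above, this time with psk=/tmp/tmp.Dy3gAuYSbe) mirrors that over the bdevperf RPC socket, as the trace that follows shows: register the same key file, attach a controller with --psk, then drive the verify workload. Roughly:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$rpc keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe
$rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# Run the workload defined on the bdevperf command line (-q 128 -o 4096 -w verify -t 10).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests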
00:14:48.790 [2024-11-20 13:34:00.516146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72159 ] 00:14:48.790 [2024-11-20 13:34:00.665565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.790 [2024-11-20 13:34:00.739886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.048 [2024-11-20 13:34:00.801979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.614 13:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.614 13:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:49.614 13:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe 00:14:50.179 13:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:50.437 [2024-11-20 13:34:02.135549] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:50.437 TLSTESTn1 00:14:50.437 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:50.437 Running I/O for 10 seconds... 00:14:52.750 3915.00 IOPS, 15.29 MiB/s [2024-11-20T13:34:05.643Z] 3983.00 IOPS, 15.56 MiB/s [2024-11-20T13:34:06.577Z] 4003.67 IOPS, 15.64 MiB/s [2024-11-20T13:34:07.512Z] 4014.00 IOPS, 15.68 MiB/s [2024-11-20T13:34:08.447Z] 4029.00 IOPS, 15.74 MiB/s [2024-11-20T13:34:09.393Z] 4010.67 IOPS, 15.67 MiB/s [2024-11-20T13:34:10.330Z] 4019.00 IOPS, 15.70 MiB/s [2024-11-20T13:34:11.705Z] 4013.25 IOPS, 15.68 MiB/s [2024-11-20T13:34:12.641Z] 4017.33 IOPS, 15.69 MiB/s [2024-11-20T13:34:12.641Z] 4000.70 IOPS, 15.63 MiB/s 00:15:00.684 Latency(us) 00:15:00.684 [2024-11-20T13:34:12.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.684 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:00.684 Verification LBA range: start 0x0 length 0x2000 00:15:00.684 TLSTESTn1 : 10.02 4006.19 15.65 0.00 0.00 31889.86 6255.71 24427.05 00:15:00.684 [2024-11-20T13:34:12.641Z] =================================================================================================================== 00:15:00.684 [2024-11-20T13:34:12.641Z] Total : 4006.19 15.65 0.00 0.00 31889.86 6255.71 24427.05 00:15:00.684 { 00:15:00.684 "results": [ 00:15:00.684 { 00:15:00.684 "job": "TLSTESTn1", 00:15:00.684 "core_mask": "0x4", 00:15:00.684 "workload": "verify", 00:15:00.684 "status": "finished", 00:15:00.684 "verify_range": { 00:15:00.684 "start": 0, 00:15:00.684 "length": 8192 00:15:00.684 }, 00:15:00.684 "queue_depth": 128, 00:15:00.684 "io_size": 4096, 00:15:00.684 "runtime": 10.017502, 00:15:00.684 "iops": 4006.1883691163725, 00:15:00.684 "mibps": 15.64917331686083, 00:15:00.684 "io_failed": 0, 00:15:00.684 "io_timeout": 0, 00:15:00.684 "avg_latency_us": 31889.85720794107, 00:15:00.684 "min_latency_us": 6255.709090909091, 00:15:00.684 
"max_latency_us": 24427.054545454546 00:15:00.684 } 00:15:00.684 ], 00:15:00.684 "core_count": 1 00:15:00.684 } 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72159 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72159 ']' 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72159 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72159 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:00.684 killing process with pid 72159 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72159' 00:15:00.684 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.684 00:15:00.684 Latency(us) 00:15:00.684 [2024-11-20T13:34:12.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.684 [2024-11-20T13:34:12.641Z] =================================================================================================================== 00:15:00.684 [2024-11-20T13:34:12.641Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72159 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72159 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Dy3gAuYSbe 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dy3gAuYSbe 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dy3gAuYSbe 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:00.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dy3gAuYSbe 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Dy3gAuYSbe 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72301 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72301 /var/tmp/bdevperf.sock 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:00.684 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72301 ']' 00:15:00.685 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.685 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.685 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.685 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.685 13:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.943 [2024-11-20 13:34:12.681724] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:15:00.943 [2024-11-20 13:34:12.682088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72301 ] 00:15:00.943 [2024-11-20 13:34:12.832339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.943 [2024-11-20 13:34:12.897575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.201 [2024-11-20 13:34:12.953409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:01.201 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.201 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:01.201 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe 00:15:01.459 [2024-11-20 13:34:13.285063] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Dy3gAuYSbe': 0100666 00:15:01.459 [2024-11-20 13:34:13.285290] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:01.459 request: 00:15:01.459 { 00:15:01.459 "name": "key0", 00:15:01.459 "path": "/tmp/tmp.Dy3gAuYSbe", 00:15:01.459 "method": "keyring_file_add_key", 00:15:01.459 "req_id": 1 00:15:01.459 } 00:15:01.459 Got JSON-RPC error response 00:15:01.459 response: 00:15:01.459 { 00:15:01.459 "code": -1, 00:15:01.459 "message": "Operation not permitted" 00:15:01.459 } 00:15:01.459 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:01.718 [2024-11-20 13:34:13.581274] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:01.718 [2024-11-20 13:34:13.581511] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:01.718 request: 00:15:01.718 { 00:15:01.718 "name": "TLSTEST", 00:15:01.718 "trtype": "tcp", 00:15:01.718 "traddr": "10.0.0.3", 00:15:01.718 "adrfam": "ipv4", 00:15:01.718 "trsvcid": "4420", 00:15:01.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:01.718 "prchk_reftag": false, 00:15:01.718 "prchk_guard": false, 00:15:01.718 "hdgst": false, 00:15:01.718 "ddgst": false, 00:15:01.718 "psk": "key0", 00:15:01.718 "allow_unrecognized_csi": false, 00:15:01.718 "method": "bdev_nvme_attach_controller", 00:15:01.718 "req_id": 1 00:15:01.718 } 00:15:01.718 Got JSON-RPC error response 00:15:01.718 response: 00:15:01.718 { 00:15:01.718 "code": -126, 00:15:01.718 "message": "Required key not available" 00:15:01.718 } 00:15:01.718 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72301 00:15:01.718 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72301 ']' 00:15:01.718 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72301 00:15:01.718 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:01.718 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.718 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72301 00:15:01.718 killing process with pid 72301 00:15:01.718 Received shutdown signal, test time was about 10.000000 seconds 00:15:01.718 00:15:01.718 Latency(us) 00:15:01.718 [2024-11-20T13:34:13.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.718 [2024-11-20T13:34:13.675Z] =================================================================================================================== 00:15:01.718 [2024-11-20T13:34:13.675Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:01.718 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:01.718 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:01.718 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72301' 00:15:01.718 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72301 00:15:01.718 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72301 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72111 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72111 ']' 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72111 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72111 00:15:01.977 killing process with pid 72111 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72111' 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72111 00:15:01.977 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72111 00:15:02.238 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:02.238 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:02.238 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:02.238 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:02.238 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72327 00:15:02.238 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:02.238 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72327 00:15:02.238 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72327 ']' 00:15:02.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.238 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.238 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.238 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.238 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.238 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.238 [2024-11-20 13:34:14.126563] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:02.238 [2024-11-20 13:34:14.126663] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.498 [2024-11-20 13:34:14.269123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.498 [2024-11-20 13:34:14.327454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.498 [2024-11-20 13:34:14.327505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.498 [2024-11-20 13:34:14.327517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.498 [2024-11-20 13:34:14.327526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.498 [2024-11-20 13:34:14.327533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
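The same permission failure is now provoked on the target side (NOT setup_nvmf_tgt below, with the key file still at 0666): keyring_file_add_key fails, no key0 ever lands in the keyring, and the subsequent nvmf_subsystem_add_host --psk key0 therefore fails with 'Key "key0" does not exist' and a -32603 Internal error, because the PSK is resolved by keyring name at that point. The dependency, spelled out:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe \
    || echo "key0 was not registered (file is still mode 0666)"
# ...so the host cannot be bound to that named PSK and add_host errors out:
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0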
00:15:02.498 [2024-11-20 13:34:14.327960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.498 [2024-11-20 13:34:14.382337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:02.498 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.498 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:02.498 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:02.498 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:02.498 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.756 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.756 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Dy3gAuYSbe 00:15:02.756 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:02.756 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Dy3gAuYSbe 00:15:02.756 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:15:02.756 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.756 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:15:02.756 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.756 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Dy3gAuYSbe 00:15:02.756 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Dy3gAuYSbe 00:15:02.756 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:03.014 [2024-11-20 13:34:14.808404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.014 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:03.273 13:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:03.531 [2024-11-20 13:34:15.344540] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:03.531 [2024-11-20 13:34:15.344792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:03.531 13:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:03.789 malloc0 00:15:03.789 13:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:04.048 13:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe 00:15:04.306 
[2024-11-20 13:34:16.235671] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Dy3gAuYSbe': 0100666 00:15:04.306 [2024-11-20 13:34:16.235720] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:04.306 request: 00:15:04.306 { 00:15:04.306 "name": "key0", 00:15:04.306 "path": "/tmp/tmp.Dy3gAuYSbe", 00:15:04.306 "method": "keyring_file_add_key", 00:15:04.306 "req_id": 1 00:15:04.306 } 00:15:04.306 Got JSON-RPC error response 00:15:04.306 response: 00:15:04.306 { 00:15:04.306 "code": -1, 00:15:04.306 "message": "Operation not permitted" 00:15:04.306 } 00:15:04.306 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:04.564 [2024-11-20 13:34:16.491793] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:04.564 [2024-11-20 13:34:16.491881] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:04.564 request: 00:15:04.564 { 00:15:04.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.564 "host": "nqn.2016-06.io.spdk:host1", 00:15:04.564 "psk": "key0", 00:15:04.564 "method": "nvmf_subsystem_add_host", 00:15:04.564 "req_id": 1 00:15:04.564 } 00:15:04.564 Got JSON-RPC error response 00:15:04.564 response: 00:15:04.564 { 00:15:04.564 "code": -32603, 00:15:04.564 "message": "Internal error" 00:15:04.565 } 00:15:04.565 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:04.565 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:04.565 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:04.565 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:04.565 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72327 00:15:04.565 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72327 ']' 00:15:04.565 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72327 00:15:04.565 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72327 00:15:04.824 killing process with pid 72327 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72327' 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72327 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72327 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Dy3gAuYSbe 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72389 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72389 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72389 ']' 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.824 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.083 [2024-11-20 13:34:16.825786] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:05.083 [2024-11-20 13:34:16.825885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.083 [2024-11-20 13:34:16.971678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.083 [2024-11-20 13:34:17.032330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.083 [2024-11-20 13:34:17.032398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.083 [2024-11-20 13:34:17.032426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.083 [2024-11-20 13:34:17.032435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.083 [2024-11-20 13:34:17.032442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:05.083 [2024-11-20 13:34:17.032858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.341 [2024-11-20 13:34:17.090884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:05.908 13:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.908 13:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:05.908 13:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:05.908 13:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:05.908 13:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.167 13:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.167 13:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Dy3gAuYSbe 00:15:06.167 13:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Dy3gAuYSbe 00:15:06.167 13:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:06.426 [2024-11-20 13:34:18.130007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.426 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:06.684 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:06.943 [2024-11-20 13:34:18.770212] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:06.943 [2024-11-20 13:34:18.770487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:06.943 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:07.202 malloc0 00:15:07.202 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:07.460 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe 00:15:07.718 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:07.977 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:07.977 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72451 00:15:07.977 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:07.977 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72451 /var/tmp/bdevperf.sock 00:15:07.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
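With the key back at 0600, the working TLS setup is rebuilt and a fresh bdevperf (pid 72451) is pointed at it; the trace below then attaches the TLS controller again and snapshots both applications with save_config, which is where the long target and bdevperf JSON dumps that follow come from. The snapshots can be taken (and later replayed) like this; the output file names are illustrative:

# Target side (default /var/tmp/spdk.sock):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt.json
# bdevperf side:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf.json
# A saved snapshot can be fed back into a fresh app instance with the matching load_config RPC:
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < tgt.json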
00:15:07.977 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72451 ']' 00:15:07.977 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.977 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.977 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:07.977 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.977 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.236 [2024-11-20 13:34:19.947285] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:08.236 [2024-11-20 13:34:19.947658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72451 ] 00:15:08.236 [2024-11-20 13:34:20.098053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.236 [2024-11-20 13:34:20.169531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.495 [2024-11-20 13:34:20.231296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:09.062 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.062 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:09.062 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe 00:15:09.320 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:09.624 [2024-11-20 13:34:21.416654] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:09.624 TLSTESTn1 00:15:09.624 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:10.229 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:10.229 "subsystems": [ 00:15:10.229 { 00:15:10.229 "subsystem": "keyring", 00:15:10.229 "config": [ 00:15:10.229 { 00:15:10.229 "method": "keyring_file_add_key", 00:15:10.229 "params": { 00:15:10.229 "name": "key0", 00:15:10.229 "path": "/tmp/tmp.Dy3gAuYSbe" 00:15:10.229 } 00:15:10.229 } 00:15:10.229 ] 00:15:10.229 }, 00:15:10.230 { 00:15:10.230 "subsystem": "iobuf", 00:15:10.230 "config": [ 00:15:10.230 { 00:15:10.230 "method": "iobuf_set_options", 00:15:10.230 "params": { 00:15:10.230 "small_pool_count": 8192, 00:15:10.230 "large_pool_count": 1024, 00:15:10.230 "small_bufsize": 8192, 00:15:10.230 "large_bufsize": 135168, 00:15:10.230 "enable_numa": false 00:15:10.230 } 00:15:10.230 } 00:15:10.230 ] 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "subsystem": "sock", 00:15:10.230 "config": [ 00:15:10.230 { 00:15:10.230 "method": "sock_set_default_impl", 00:15:10.230 "params": { 
00:15:10.230 "impl_name": "uring" 00:15:10.230 } 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "method": "sock_impl_set_options", 00:15:10.230 "params": { 00:15:10.230 "impl_name": "ssl", 00:15:10.230 "recv_buf_size": 4096, 00:15:10.230 "send_buf_size": 4096, 00:15:10.230 "enable_recv_pipe": true, 00:15:10.230 "enable_quickack": false, 00:15:10.230 "enable_placement_id": 0, 00:15:10.230 "enable_zerocopy_send_server": true, 00:15:10.230 "enable_zerocopy_send_client": false, 00:15:10.230 "zerocopy_threshold": 0, 00:15:10.230 "tls_version": 0, 00:15:10.230 "enable_ktls": false 00:15:10.230 } 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "method": "sock_impl_set_options", 00:15:10.230 "params": { 00:15:10.230 "impl_name": "posix", 00:15:10.230 "recv_buf_size": 2097152, 00:15:10.230 "send_buf_size": 2097152, 00:15:10.230 "enable_recv_pipe": true, 00:15:10.230 "enable_quickack": false, 00:15:10.230 "enable_placement_id": 0, 00:15:10.230 "enable_zerocopy_send_server": true, 00:15:10.230 "enable_zerocopy_send_client": false, 00:15:10.230 "zerocopy_threshold": 0, 00:15:10.230 "tls_version": 0, 00:15:10.230 "enable_ktls": false 00:15:10.230 } 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "method": "sock_impl_set_options", 00:15:10.230 "params": { 00:15:10.230 "impl_name": "uring", 00:15:10.230 "recv_buf_size": 2097152, 00:15:10.230 "send_buf_size": 2097152, 00:15:10.230 "enable_recv_pipe": true, 00:15:10.230 "enable_quickack": false, 00:15:10.230 "enable_placement_id": 0, 00:15:10.230 "enable_zerocopy_send_server": false, 00:15:10.230 "enable_zerocopy_send_client": false, 00:15:10.230 "zerocopy_threshold": 0, 00:15:10.230 "tls_version": 0, 00:15:10.230 "enable_ktls": false 00:15:10.230 } 00:15:10.230 } 00:15:10.230 ] 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "subsystem": "vmd", 00:15:10.230 "config": [] 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "subsystem": "accel", 00:15:10.230 "config": [ 00:15:10.230 { 00:15:10.230 "method": "accel_set_options", 00:15:10.230 "params": { 00:15:10.230 "small_cache_size": 128, 00:15:10.230 "large_cache_size": 16, 00:15:10.230 "task_count": 2048, 00:15:10.230 "sequence_count": 2048, 00:15:10.230 "buf_count": 2048 00:15:10.230 } 00:15:10.230 } 00:15:10.230 ] 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "subsystem": "bdev", 00:15:10.230 "config": [ 00:15:10.230 { 00:15:10.230 "method": "bdev_set_options", 00:15:10.230 "params": { 00:15:10.230 "bdev_io_pool_size": 65535, 00:15:10.230 "bdev_io_cache_size": 256, 00:15:10.230 "bdev_auto_examine": true, 00:15:10.230 "iobuf_small_cache_size": 128, 00:15:10.230 "iobuf_large_cache_size": 16 00:15:10.230 } 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "method": "bdev_raid_set_options", 00:15:10.230 "params": { 00:15:10.230 "process_window_size_kb": 1024, 00:15:10.230 "process_max_bandwidth_mb_sec": 0 00:15:10.230 } 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "method": "bdev_iscsi_set_options", 00:15:10.230 "params": { 00:15:10.230 "timeout_sec": 30 00:15:10.230 } 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "method": "bdev_nvme_set_options", 00:15:10.230 "params": { 00:15:10.230 "action_on_timeout": "none", 00:15:10.230 "timeout_us": 0, 00:15:10.230 "timeout_admin_us": 0, 00:15:10.230 "keep_alive_timeout_ms": 10000, 00:15:10.230 "arbitration_burst": 0, 00:15:10.230 "low_priority_weight": 0, 00:15:10.230 "medium_priority_weight": 0, 00:15:10.230 "high_priority_weight": 0, 00:15:10.230 "nvme_adminq_poll_period_us": 10000, 00:15:10.230 "nvme_ioq_poll_period_us": 0, 00:15:10.230 "io_queue_requests": 0, 00:15:10.230 "delay_cmd_submit": 
true, 00:15:10.230 "transport_retry_count": 4, 00:15:10.230 "bdev_retry_count": 3, 00:15:10.230 "transport_ack_timeout": 0, 00:15:10.230 "ctrlr_loss_timeout_sec": 0, 00:15:10.230 "reconnect_delay_sec": 0, 00:15:10.230 "fast_io_fail_timeout_sec": 0, 00:15:10.230 "disable_auto_failback": false, 00:15:10.230 "generate_uuids": false, 00:15:10.230 "transport_tos": 0, 00:15:10.230 "nvme_error_stat": false, 00:15:10.230 "rdma_srq_size": 0, 00:15:10.230 "io_path_stat": false, 00:15:10.230 "allow_accel_sequence": false, 00:15:10.230 "rdma_max_cq_size": 0, 00:15:10.230 "rdma_cm_event_timeout_ms": 0, 00:15:10.230 "dhchap_digests": [ 00:15:10.230 "sha256", 00:15:10.230 "sha384", 00:15:10.230 "sha512" 00:15:10.230 ], 00:15:10.230 "dhchap_dhgroups": [ 00:15:10.230 "null", 00:15:10.230 "ffdhe2048", 00:15:10.230 "ffdhe3072", 00:15:10.230 "ffdhe4096", 00:15:10.230 "ffdhe6144", 00:15:10.230 "ffdhe8192" 00:15:10.230 ] 00:15:10.230 } 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "method": "bdev_nvme_set_hotplug", 00:15:10.230 "params": { 00:15:10.230 "period_us": 100000, 00:15:10.230 "enable": false 00:15:10.230 } 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "method": "bdev_malloc_create", 00:15:10.230 "params": { 00:15:10.230 "name": "malloc0", 00:15:10.230 "num_blocks": 8192, 00:15:10.230 "block_size": 4096, 00:15:10.230 "physical_block_size": 4096, 00:15:10.230 "uuid": "53b3edaa-7599-4be6-b599-b6aa549db0ca", 00:15:10.230 "optimal_io_boundary": 0, 00:15:10.230 "md_size": 0, 00:15:10.230 "dif_type": 0, 00:15:10.230 "dif_is_head_of_md": false, 00:15:10.230 "dif_pi_format": 0 00:15:10.230 } 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "method": "bdev_wait_for_examine" 00:15:10.230 } 00:15:10.230 ] 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "subsystem": "nbd", 00:15:10.230 "config": [] 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "subsystem": "scheduler", 00:15:10.230 "config": [ 00:15:10.230 { 00:15:10.230 "method": "framework_set_scheduler", 00:15:10.230 "params": { 00:15:10.230 "name": "static" 00:15:10.230 } 00:15:10.230 } 00:15:10.230 ] 00:15:10.230 }, 00:15:10.230 { 00:15:10.230 "subsystem": "nvmf", 00:15:10.230 "config": [ 00:15:10.230 { 00:15:10.230 "method": "nvmf_set_config", 00:15:10.230 "params": { 00:15:10.230 "discovery_filter": "match_any", 00:15:10.231 "admin_cmd_passthru": { 00:15:10.231 "identify_ctrlr": false 00:15:10.231 }, 00:15:10.231 "dhchap_digests": [ 00:15:10.231 "sha256", 00:15:10.231 "sha384", 00:15:10.231 "sha512" 00:15:10.231 ], 00:15:10.231 "dhchap_dhgroups": [ 00:15:10.231 "null", 00:15:10.231 "ffdhe2048", 00:15:10.231 "ffdhe3072", 00:15:10.231 "ffdhe4096", 00:15:10.231 "ffdhe6144", 00:15:10.231 "ffdhe8192" 00:15:10.231 ] 00:15:10.231 } 00:15:10.231 }, 00:15:10.231 { 00:15:10.231 "method": "nvmf_set_max_subsystems", 00:15:10.231 "params": { 00:15:10.231 "max_subsystems": 1024 00:15:10.231 } 00:15:10.231 }, 00:15:10.231 { 00:15:10.231 "method": "nvmf_set_crdt", 00:15:10.231 "params": { 00:15:10.231 "crdt1": 0, 00:15:10.231 "crdt2": 0, 00:15:10.231 "crdt3": 0 00:15:10.231 } 00:15:10.231 }, 00:15:10.231 { 00:15:10.231 "method": "nvmf_create_transport", 00:15:10.231 "params": { 00:15:10.231 "trtype": "TCP", 00:15:10.231 "max_queue_depth": 128, 00:15:10.231 "max_io_qpairs_per_ctrlr": 127, 00:15:10.231 "in_capsule_data_size": 4096, 00:15:10.231 "max_io_size": 131072, 00:15:10.231 "io_unit_size": 131072, 00:15:10.231 "max_aq_depth": 128, 00:15:10.231 "num_shared_buffers": 511, 00:15:10.231 "buf_cache_size": 4294967295, 00:15:10.231 "dif_insert_or_strip": false, 00:15:10.231 "zcopy": false, 
00:15:10.231 "c2h_success": false, 00:15:10.231 "sock_priority": 0, 00:15:10.231 "abort_timeout_sec": 1, 00:15:10.231 "ack_timeout": 0, 00:15:10.231 "data_wr_pool_size": 0 00:15:10.231 } 00:15:10.231 }, 00:15:10.231 { 00:15:10.231 "method": "nvmf_create_subsystem", 00:15:10.231 "params": { 00:15:10.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.231 "allow_any_host": false, 00:15:10.231 "serial_number": "SPDK00000000000001", 00:15:10.231 "model_number": "SPDK bdev Controller", 00:15:10.231 "max_namespaces": 10, 00:15:10.231 "min_cntlid": 1, 00:15:10.231 "max_cntlid": 65519, 00:15:10.231 "ana_reporting": false 00:15:10.231 } 00:15:10.231 }, 00:15:10.231 { 00:15:10.231 "method": "nvmf_subsystem_add_host", 00:15:10.231 "params": { 00:15:10.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.231 "host": "nqn.2016-06.io.spdk:host1", 00:15:10.231 "psk": "key0" 00:15:10.231 } 00:15:10.231 }, 00:15:10.231 { 00:15:10.231 "method": "nvmf_subsystem_add_ns", 00:15:10.231 "params": { 00:15:10.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.231 "namespace": { 00:15:10.231 "nsid": 1, 00:15:10.231 "bdev_name": "malloc0", 00:15:10.231 "nguid": "53B3EDAA75994BE6B599B6AA549DB0CA", 00:15:10.231 "uuid": "53b3edaa-7599-4be6-b599-b6aa549db0ca", 00:15:10.231 "no_auto_visible": false 00:15:10.231 } 00:15:10.231 } 00:15:10.231 }, 00:15:10.231 { 00:15:10.231 "method": "nvmf_subsystem_add_listener", 00:15:10.231 "params": { 00:15:10.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.231 "listen_address": { 00:15:10.231 "trtype": "TCP", 00:15:10.231 "adrfam": "IPv4", 00:15:10.231 "traddr": "10.0.0.3", 00:15:10.231 "trsvcid": "4420" 00:15:10.231 }, 00:15:10.231 "secure_channel": true 00:15:10.231 } 00:15:10.231 } 00:15:10.231 ] 00:15:10.231 } 00:15:10.231 ] 00:15:10.231 }' 00:15:10.231 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:10.490 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:10.490 "subsystems": [ 00:15:10.490 { 00:15:10.490 "subsystem": "keyring", 00:15:10.490 "config": [ 00:15:10.490 { 00:15:10.490 "method": "keyring_file_add_key", 00:15:10.490 "params": { 00:15:10.490 "name": "key0", 00:15:10.490 "path": "/tmp/tmp.Dy3gAuYSbe" 00:15:10.490 } 00:15:10.490 } 00:15:10.490 ] 00:15:10.490 }, 00:15:10.490 { 00:15:10.490 "subsystem": "iobuf", 00:15:10.490 "config": [ 00:15:10.490 { 00:15:10.490 "method": "iobuf_set_options", 00:15:10.490 "params": { 00:15:10.490 "small_pool_count": 8192, 00:15:10.490 "large_pool_count": 1024, 00:15:10.490 "small_bufsize": 8192, 00:15:10.490 "large_bufsize": 135168, 00:15:10.490 "enable_numa": false 00:15:10.490 } 00:15:10.490 } 00:15:10.490 ] 00:15:10.490 }, 00:15:10.490 { 00:15:10.490 "subsystem": "sock", 00:15:10.490 "config": [ 00:15:10.490 { 00:15:10.490 "method": "sock_set_default_impl", 00:15:10.490 "params": { 00:15:10.490 "impl_name": "uring" 00:15:10.490 } 00:15:10.490 }, 00:15:10.490 { 00:15:10.490 "method": "sock_impl_set_options", 00:15:10.490 "params": { 00:15:10.490 "impl_name": "ssl", 00:15:10.490 "recv_buf_size": 4096, 00:15:10.490 "send_buf_size": 4096, 00:15:10.490 "enable_recv_pipe": true, 00:15:10.490 "enable_quickack": false, 00:15:10.490 "enable_placement_id": 0, 00:15:10.490 "enable_zerocopy_send_server": true, 00:15:10.490 "enable_zerocopy_send_client": false, 00:15:10.490 "zerocopy_threshold": 0, 00:15:10.490 "tls_version": 0, 00:15:10.490 "enable_ktls": false 00:15:10.490 } 00:15:10.490 }, 
00:15:10.490 { 00:15:10.490 "method": "sock_impl_set_options", 00:15:10.490 "params": { 00:15:10.490 "impl_name": "posix", 00:15:10.490 "recv_buf_size": 2097152, 00:15:10.490 "send_buf_size": 2097152, 00:15:10.490 "enable_recv_pipe": true, 00:15:10.490 "enable_quickack": false, 00:15:10.490 "enable_placement_id": 0, 00:15:10.490 "enable_zerocopy_send_server": true, 00:15:10.490 "enable_zerocopy_send_client": false, 00:15:10.490 "zerocopy_threshold": 0, 00:15:10.490 "tls_version": 0, 00:15:10.490 "enable_ktls": false 00:15:10.490 } 00:15:10.490 }, 00:15:10.490 { 00:15:10.490 "method": "sock_impl_set_options", 00:15:10.491 "params": { 00:15:10.491 "impl_name": "uring", 00:15:10.491 "recv_buf_size": 2097152, 00:15:10.491 "send_buf_size": 2097152, 00:15:10.491 "enable_recv_pipe": true, 00:15:10.491 "enable_quickack": false, 00:15:10.491 "enable_placement_id": 0, 00:15:10.491 "enable_zerocopy_send_server": false, 00:15:10.491 "enable_zerocopy_send_client": false, 00:15:10.491 "zerocopy_threshold": 0, 00:15:10.491 "tls_version": 0, 00:15:10.491 "enable_ktls": false 00:15:10.491 } 00:15:10.491 } 00:15:10.491 ] 00:15:10.491 }, 00:15:10.491 { 00:15:10.491 "subsystem": "vmd", 00:15:10.491 "config": [] 00:15:10.491 }, 00:15:10.491 { 00:15:10.491 "subsystem": "accel", 00:15:10.491 "config": [ 00:15:10.491 { 00:15:10.491 "method": "accel_set_options", 00:15:10.491 "params": { 00:15:10.491 "small_cache_size": 128, 00:15:10.491 "large_cache_size": 16, 00:15:10.491 "task_count": 2048, 00:15:10.491 "sequence_count": 2048, 00:15:10.491 "buf_count": 2048 00:15:10.491 } 00:15:10.491 } 00:15:10.491 ] 00:15:10.491 }, 00:15:10.491 { 00:15:10.491 "subsystem": "bdev", 00:15:10.491 "config": [ 00:15:10.491 { 00:15:10.491 "method": "bdev_set_options", 00:15:10.491 "params": { 00:15:10.491 "bdev_io_pool_size": 65535, 00:15:10.491 "bdev_io_cache_size": 256, 00:15:10.491 "bdev_auto_examine": true, 00:15:10.491 "iobuf_small_cache_size": 128, 00:15:10.491 "iobuf_large_cache_size": 16 00:15:10.491 } 00:15:10.491 }, 00:15:10.491 { 00:15:10.491 "method": "bdev_raid_set_options", 00:15:10.491 "params": { 00:15:10.491 "process_window_size_kb": 1024, 00:15:10.491 "process_max_bandwidth_mb_sec": 0 00:15:10.491 } 00:15:10.491 }, 00:15:10.491 { 00:15:10.491 "method": "bdev_iscsi_set_options", 00:15:10.491 "params": { 00:15:10.491 "timeout_sec": 30 00:15:10.491 } 00:15:10.491 }, 00:15:10.491 { 00:15:10.491 "method": "bdev_nvme_set_options", 00:15:10.491 "params": { 00:15:10.491 "action_on_timeout": "none", 00:15:10.491 "timeout_us": 0, 00:15:10.491 "timeout_admin_us": 0, 00:15:10.491 "keep_alive_timeout_ms": 10000, 00:15:10.491 "arbitration_burst": 0, 00:15:10.491 "low_priority_weight": 0, 00:15:10.491 "medium_priority_weight": 0, 00:15:10.491 "high_priority_weight": 0, 00:15:10.491 "nvme_adminq_poll_period_us": 10000, 00:15:10.491 "nvme_ioq_poll_period_us": 0, 00:15:10.491 "io_queue_requests": 512, 00:15:10.491 "delay_cmd_submit": true, 00:15:10.491 "transport_retry_count": 4, 00:15:10.491 "bdev_retry_count": 3, 00:15:10.491 "transport_ack_timeout": 0, 00:15:10.491 "ctrlr_loss_timeout_sec": 0, 00:15:10.491 "reconnect_delay_sec": 0, 00:15:10.491 "fast_io_fail_timeout_sec": 0, 00:15:10.491 "disable_auto_failback": false, 00:15:10.491 "generate_uuids": false, 00:15:10.491 "transport_tos": 0, 00:15:10.491 "nvme_error_stat": false, 00:15:10.491 "rdma_srq_size": 0, 00:15:10.491 "io_path_stat": false, 00:15:10.491 "allow_accel_sequence": false, 00:15:10.491 "rdma_max_cq_size": 0, 00:15:10.491 "rdma_cm_event_timeout_ms": 0, 00:15:10.491 
"dhchap_digests": [ 00:15:10.491 "sha256", 00:15:10.491 "sha384", 00:15:10.491 "sha512" 00:15:10.491 ], 00:15:10.491 "dhchap_dhgroups": [ 00:15:10.491 "null", 00:15:10.491 "ffdhe2048", 00:15:10.491 "ffdhe3072", 00:15:10.491 "ffdhe4096", 00:15:10.491 "ffdhe6144", 00:15:10.491 "ffdhe8192" 00:15:10.491 ] 00:15:10.491 } 00:15:10.491 }, 00:15:10.491 { 00:15:10.491 "method": "bdev_nvme_attach_controller", 00:15:10.491 "params": { 00:15:10.491 "name": "TLSTEST", 00:15:10.491 "trtype": "TCP", 00:15:10.491 "adrfam": "IPv4", 00:15:10.491 "traddr": "10.0.0.3", 00:15:10.491 "trsvcid": "4420", 00:15:10.491 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.491 "prchk_reftag": false, 00:15:10.491 "prchk_guard": false, 00:15:10.491 "ctrlr_loss_timeout_sec": 0, 00:15:10.491 "reconnect_delay_sec": 0, 00:15:10.491 "fast_io_fail_timeout_sec": 0, 00:15:10.491 "psk": "key0", 00:15:10.491 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.491 "hdgst": false, 00:15:10.491 "ddgst": false, 00:15:10.491 "multipath": "multipath" 00:15:10.491 } 00:15:10.491 }, 00:15:10.491 { 00:15:10.491 "method": "bdev_nvme_set_hotplug", 00:15:10.491 "params": { 00:15:10.491 "period_us": 100000, 00:15:10.491 "enable": false 00:15:10.491 } 00:15:10.491 }, 00:15:10.491 { 00:15:10.491 "method": "bdev_wait_for_examine" 00:15:10.491 } 00:15:10.491 ] 00:15:10.491 }, 00:15:10.491 { 00:15:10.491 "subsystem": "nbd", 00:15:10.491 "config": [] 00:15:10.491 } 00:15:10.491 ] 00:15:10.491 }' 00:15:10.491 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72451 00:15:10.491 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72451 ']' 00:15:10.491 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72451 00:15:10.491 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:10.491 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.491 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72451 00:15:10.491 killing process with pid 72451 00:15:10.491 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.491 00:15:10.491 Latency(us) 00:15:10.491 [2024-11-20T13:34:22.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.491 [2024-11-20T13:34:22.448Z] =================================================================================================================== 00:15:10.491 [2024-11-20T13:34:22.448Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:10.491 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:10.491 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:10.491 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72451' 00:15:10.491 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72451 00:15:10.491 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72451 00:15:10.750 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72389 00:15:10.750 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72389 ']' 00:15:10.750 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 72389 00:15:10.750 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:10.750 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.750 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72389 00:15:10.750 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:10.750 killing process with pid 72389 00:15:10.750 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:10.750 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72389' 00:15:10.750 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72389 00:15:10.750 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72389 00:15:11.009 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:11.009 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:11.009 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:11.009 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:11.009 "subsystems": [ 00:15:11.009 { 00:15:11.009 "subsystem": "keyring", 00:15:11.009 "config": [ 00:15:11.009 { 00:15:11.009 "method": "keyring_file_add_key", 00:15:11.009 "params": { 00:15:11.009 "name": "key0", 00:15:11.009 "path": "/tmp/tmp.Dy3gAuYSbe" 00:15:11.009 } 00:15:11.009 } 00:15:11.009 ] 00:15:11.009 }, 00:15:11.009 { 00:15:11.009 "subsystem": "iobuf", 00:15:11.009 "config": [ 00:15:11.009 { 00:15:11.009 "method": "iobuf_set_options", 00:15:11.009 "params": { 00:15:11.009 "small_pool_count": 8192, 00:15:11.009 "large_pool_count": 1024, 00:15:11.009 "small_bufsize": 8192, 00:15:11.009 "large_bufsize": 135168, 00:15:11.009 "enable_numa": false 00:15:11.009 } 00:15:11.009 } 00:15:11.009 ] 00:15:11.009 }, 00:15:11.009 { 00:15:11.009 "subsystem": "sock", 00:15:11.009 "config": [ 00:15:11.009 { 00:15:11.009 "method": "sock_set_default_impl", 00:15:11.009 "params": { 00:15:11.009 "impl_name": "uring" 00:15:11.009 } 00:15:11.009 }, 00:15:11.009 { 00:15:11.009 "method": "sock_impl_set_options", 00:15:11.009 "params": { 00:15:11.009 "impl_name": "ssl", 00:15:11.009 "recv_buf_size": 4096, 00:15:11.009 "send_buf_size": 4096, 00:15:11.009 "enable_recv_pipe": true, 00:15:11.009 "enable_quickack": false, 00:15:11.009 "enable_placement_id": 0, 00:15:11.009 "enable_zerocopy_send_server": true, 00:15:11.009 "enable_zerocopy_send_client": false, 00:15:11.009 "zerocopy_threshold": 0, 00:15:11.009 "tls_version": 0, 00:15:11.010 "enable_ktls": false 00:15:11.010 } 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "method": "sock_impl_set_options", 00:15:11.010 "params": { 00:15:11.010 "impl_name": "posix", 00:15:11.010 "recv_buf_size": 2097152, 00:15:11.010 "send_buf_size": 2097152, 00:15:11.010 "enable_recv_pipe": true, 00:15:11.010 "enable_quickack": false, 00:15:11.010 "enable_placement_id": 0, 00:15:11.010 "enable_zerocopy_send_server": true, 00:15:11.010 "enable_zerocopy_send_client": false, 00:15:11.010 "zerocopy_threshold": 0, 00:15:11.010 "tls_version": 0, 00:15:11.010 "enable_ktls": false 00:15:11.010 } 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "method": "sock_impl_set_options", 
00:15:11.010 "params": { 00:15:11.010 "impl_name": "uring", 00:15:11.010 "recv_buf_size": 2097152, 00:15:11.010 "send_buf_size": 2097152, 00:15:11.010 "enable_recv_pipe": true, 00:15:11.010 "enable_quickack": false, 00:15:11.010 "enable_placement_id": 0, 00:15:11.010 "enable_zerocopy_send_server": false, 00:15:11.010 "enable_zerocopy_send_client": false, 00:15:11.010 "zerocopy_threshold": 0, 00:15:11.010 "tls_version": 0, 00:15:11.010 "enable_ktls": false 00:15:11.010 } 00:15:11.010 } 00:15:11.010 ] 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "subsystem": "vmd", 00:15:11.010 "config": [] 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "subsystem": "accel", 00:15:11.010 "config": [ 00:15:11.010 { 00:15:11.010 "method": "accel_set_options", 00:15:11.010 "params": { 00:15:11.010 "small_cache_size": 128, 00:15:11.010 "large_cache_size": 16, 00:15:11.010 "task_count": 2048, 00:15:11.010 "sequence_count": 2048, 00:15:11.010 "buf_count": 2048 00:15:11.010 } 00:15:11.010 } 00:15:11.010 ] 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "subsystem": "bdev", 00:15:11.010 "config": [ 00:15:11.010 { 00:15:11.010 "method": "bdev_set_options", 00:15:11.010 "params": { 00:15:11.010 "bdev_io_pool_size": 65535, 00:15:11.010 "bdev_io_cache_size": 256, 00:15:11.010 "bdev_auto_examine": true, 00:15:11.010 "iobuf_small_cache_size": 128, 00:15:11.010 "iobuf_large_cache_size": 16 00:15:11.010 } 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "method": "bdev_raid_set_options", 00:15:11.010 "params": { 00:15:11.010 "process_window_size_kb": 1024, 00:15:11.010 "process_max_bandwidth_mb_sec": 0 00:15:11.010 } 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "method": "bdev_iscsi_set_options", 00:15:11.010 "params": { 00:15:11.010 "timeout_sec": 30 00:15:11.010 } 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "method": "bdev_nvme_set_options", 00:15:11.010 "params": { 00:15:11.010 "action_on_timeout": "none", 00:15:11.010 "timeout_us": 0, 00:15:11.010 "timeout_admin_us": 0, 00:15:11.010 "keep_alive_timeout_ms": 10000, 00:15:11.010 "arbitration_burst": 0, 00:15:11.010 "low_priority_weight": 0, 00:15:11.010 "medium_priority_weight": 0, 00:15:11.010 "high_priority_weight": 0, 00:15:11.010 "nvme_adminq_poll_period_us": 10000, 00:15:11.010 "nvme_ioq_poll_period_us": 0, 00:15:11.010 "io_queue_requests": 0, 00:15:11.010 "delay_cmd_submit": true, 00:15:11.010 "transport_retry_count": 4, 00:15:11.010 "bdev_retry_count": 3, 00:15:11.010 "transport_ack_timeout": 0, 00:15:11.010 "ctrlr_loss_timeout_sec": 0, 00:15:11.010 "reconnect_delay_sec": 0, 00:15:11.010 "fast_io_fail_timeout_sec": 0, 00:15:11.010 "disable_auto_failback": false, 00:15:11.010 "generate_uuids": false, 00:15:11.010 "transport_tos": 0, 00:15:11.010 "nvme_error_stat": false, 00:15:11.010 "rdma_srq_size": 0, 00:15:11.010 "io_path_stat": false, 00:15:11.010 "allow_accel_sequence": false, 00:15:11.010 "rdma_max_cq_size": 0, 00:15:11.010 "rdma_cm_event_timeout_ms": 0, 00:15:11.010 "dhchap_digests": [ 00:15:11.010 "sha256", 00:15:11.010 "sha384", 00:15:11.010 "sha512" 00:15:11.010 ], 00:15:11.010 "dhchap_dhgroups": [ 00:15:11.010 "null", 00:15:11.010 "ffdhe2048", 00:15:11.010 "ffdhe3072", 00:15:11.010 "ffdhe4096", 00:15:11.010 "ffdhe6144", 00:15:11.010 "ffdhe8192" 00:15:11.010 ] 00:15:11.010 } 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "method": "bdev_nvme_set_hotplug", 00:15:11.010 "params": { 00:15:11.010 "period_us": 100000, 00:15:11.010 "enable": false 00:15:11.010 } 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "method": "bdev_malloc_create", 00:15:11.010 "params": { 00:15:11.010 
"name": "malloc0", 00:15:11.010 "num_blocks": 8192, 00:15:11.010 "block_size": 4096, 00:15:11.010 "physical_block_size": 4096, 00:15:11.010 "uuid": "53b3edaa-7599-4be6-b599-b6aa549db0ca", 00:15:11.010 "optimal_io_boundary": 0, 00:15:11.010 "md_size": 0, 00:15:11.010 "dif_type": 0, 00:15:11.010 "dif_is_head_of_md": false, 00:15:11.010 "dif_pi_format": 0 00:15:11.010 } 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "method": "bdev_wait_for_examine" 00:15:11.010 } 00:15:11.010 ] 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "subsystem": "nbd", 00:15:11.010 "config": [] 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "subsystem": "scheduler", 00:15:11.010 "config": [ 00:15:11.010 { 00:15:11.010 "method": "framework_set_scheduler", 00:15:11.010 "params": { 00:15:11.010 "name": "static" 00:15:11.010 } 00:15:11.010 } 00:15:11.010 ] 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "subsystem": "nvmf", 00:15:11.010 "config": [ 00:15:11.010 { 00:15:11.010 "method": "nvmf_set_config", 00:15:11.010 "params": { 00:15:11.010 "discovery_filter": "match_any", 00:15:11.010 "admin_cmd_passthru": { 00:15:11.010 "identify_ctrlr": false 00:15:11.010 }, 00:15:11.010 "dhchap_digests": [ 00:15:11.010 "sha256", 00:15:11.010 "sha384", 00:15:11.010 "sha512" 00:15:11.010 ], 00:15:11.010 "dhchap_dhgroups": [ 00:15:11.010 "null", 00:15:11.010 "ffdhe2048", 00:15:11.010 "ffdhe3072", 00:15:11.010 "ffdhe4096", 00:15:11.010 "ffdhe6144", 00:15:11.010 "ffdhe8192" 00:15:11.010 ] 00:15:11.010 } 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "method": "nvmf_set_max_subsystems", 00:15:11.010 "params": { 00:15:11.010 "max_subsystems": 1024 00:15:11.010 } 00:15:11.010 }, 00:15:11.010 { 00:15:11.010 "method": "nvmf_set_crdt", 00:15:11.011 "params": { 00:15:11.011 "crdt1": 0, 00:15:11.011 "crdt2": 0, 00:15:11.011 "crdt3": 0 00:15:11.011 } 00:15:11.011 }, 00:15:11.011 { 00:15:11.011 "method": "nvmf_create_transport", 00:15:11.011 "params": { 00:15:11.011 "trtype": "TCP", 00:15:11.011 "max_queue_depth": 128, 00:15:11.011 "max_io_qpairs_per_ctrlr": 127, 00:15:11.011 "in_capsule_data_size": 4096, 00:15:11.011 "max_io_size": 131072, 00:15:11.011 "io_unit_size": 131072, 00:15:11.011 "max_aq_depth": 128, 00:15:11.011 "num_shared_buffers": 511, 00:15:11.011 "buf_cache_size": 4294967295, 00:15:11.011 "dif_insert_or_strip": false, 00:15:11.011 "zcopy": false, 00:15:11.011 "c2h_success": false, 00:15:11.011 "sock_priority": 0, 00:15:11.011 "abort_timeout_sec": 1, 00:15:11.011 "ack_timeout": 0, 00:15:11.011 "data_wr_pool_size": 0 00:15:11.011 } 00:15:11.011 }, 00:15:11.011 { 00:15:11.011 "method": "nvmf_create_subsystem", 00:15:11.011 "params": { 00:15:11.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.011 "allow_any_host": false, 00:15:11.011 "serial_number": "SPDK00000000000001", 00:15:11.011 "model_number": "SPDK bdev Controller", 00:15:11.011 "max_namespaces": 10, 00:15:11.011 "min_cntlid": 1, 00:15:11.011 "max_cntlid": 65519, 00:15:11.011 "ana_reporting": false 00:15:11.011 } 00:15:11.011 }, 00:15:11.011 { 00:15:11.011 "method": "nvmf_subsystem_add_host", 00:15:11.011 "params": { 00:15:11.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.011 "host": "nqn.2016-06.io.spdk:host1", 00:15:11.011 "psk": "key0" 00:15:11.011 } 00:15:11.011 }, 00:15:11.011 { 00:15:11.011 "method": "nvmf_subsystem_add_ns", 00:15:11.011 "params": { 00:15:11.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.011 "namespace": { 00:15:11.011 "nsid": 1, 00:15:11.011 "bdev_name": "malloc0", 00:15:11.011 "nguid": "53B3EDAA75994BE6B599B6AA549DB0CA", 00:15:11.011 "uuid": 
"53b3edaa-7599-4be6-b599-b6aa549db0ca", 00:15:11.011 "no_auto_visible": false 00:15:11.011 } 00:15:11.011 } 00:15:11.011 }, 00:15:11.011 { 00:15:11.011 "method": "nvmf_subsystem_add_listener", 00:15:11.011 "params": { 00:15:11.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.011 "listen_address": { 00:15:11.011 "trtype": "TCP", 00:15:11.011 "adrfam": "IPv4", 00:15:11.011 "traddr": "10.0.0.3", 00:15:11.011 "trsvcid": "4420" 00:15:11.011 }, 00:15:11.011 "secure_channel": true 00:15:11.011 } 00:15:11.011 } 00:15:11.011 ] 00:15:11.011 } 00:15:11.011 ] 00:15:11.011 }' 00:15:11.011 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.011 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72505 00:15:11.011 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:11.011 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72505 00:15:11.011 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72505 ']' 00:15:11.011 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.011 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.011 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.011 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.011 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.011 [2024-11-20 13:34:22.871298] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:11.011 [2024-11-20 13:34:22.871387] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.269 [2024-11-20 13:34:23.019785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.269 [2024-11-20 13:34:23.082661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.269 [2024-11-20 13:34:23.082726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.269 [2024-11-20 13:34:23.082739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.269 [2024-11-20 13:34:23.082747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.269 [2024-11-20 13:34:23.082754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:11.269 [2024-11-20 13:34:23.083266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.528 [2024-11-20 13:34:23.252269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:11.528 [2024-11-20 13:34:23.337503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.528 [2024-11-20 13:34:23.369444] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:11.528 [2024-11-20 13:34:23.369676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72537 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72537 /var/tmp/bdevperf.sock 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72537 ']' 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.095 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:12.095 "subsystems": [ 00:15:12.095 { 00:15:12.095 "subsystem": "keyring", 00:15:12.095 "config": [ 00:15:12.095 { 00:15:12.095 "method": "keyring_file_add_key", 00:15:12.095 "params": { 00:15:12.095 "name": "key0", 00:15:12.095 "path": "/tmp/tmp.Dy3gAuYSbe" 00:15:12.095 } 00:15:12.095 } 00:15:12.095 ] 00:15:12.095 }, 00:15:12.095 { 00:15:12.095 "subsystem": "iobuf", 00:15:12.095 "config": [ 00:15:12.095 { 00:15:12.095 "method": "iobuf_set_options", 00:15:12.095 "params": { 00:15:12.095 "small_pool_count": 8192, 00:15:12.095 "large_pool_count": 1024, 00:15:12.095 "small_bufsize": 8192, 00:15:12.095 "large_bufsize": 135168, 00:15:12.095 "enable_numa": false 00:15:12.095 } 00:15:12.095 } 00:15:12.095 ] 00:15:12.095 }, 00:15:12.095 { 00:15:12.096 "subsystem": "sock", 00:15:12.096 "config": [ 00:15:12.096 { 00:15:12.096 "method": "sock_set_default_impl", 00:15:12.096 "params": { 00:15:12.096 "impl_name": "uring" 00:15:12.096 } 00:15:12.096 }, 00:15:12.096 { 00:15:12.096 "method": "sock_impl_set_options", 00:15:12.096 "params": { 00:15:12.096 "impl_name": "ssl", 00:15:12.096 "recv_buf_size": 4096, 00:15:12.096 "send_buf_size": 4096, 00:15:12.096 "enable_recv_pipe": true, 00:15:12.096 "enable_quickack": false, 00:15:12.096 "enable_placement_id": 0, 00:15:12.096 "enable_zerocopy_send_server": true, 00:15:12.096 "enable_zerocopy_send_client": false, 00:15:12.096 "zerocopy_threshold": 0, 00:15:12.096 "tls_version": 0, 00:15:12.096 "enable_ktls": false 00:15:12.096 } 00:15:12.096 }, 00:15:12.096 { 00:15:12.096 "method": "sock_impl_set_options", 00:15:12.096 "params": { 00:15:12.096 "impl_name": "posix", 00:15:12.096 "recv_buf_size": 2097152, 00:15:12.096 "send_buf_size": 2097152, 00:15:12.096 "enable_recv_pipe": true, 00:15:12.096 "enable_quickack": false, 00:15:12.096 "enable_placement_id": 0, 00:15:12.096 "enable_zerocopy_send_server": true, 00:15:12.096 "enable_zerocopy_send_client": false, 00:15:12.096 "zerocopy_threshold": 0, 00:15:12.096 "tls_version": 0, 00:15:12.096 "enable_ktls": false 00:15:12.096 } 00:15:12.096 }, 00:15:12.096 { 00:15:12.096 "method": "sock_impl_set_options", 00:15:12.096 "params": { 00:15:12.096 "impl_name": "uring", 00:15:12.096 "recv_buf_size": 2097152, 00:15:12.096 "send_buf_size": 2097152, 00:15:12.096 "enable_recv_pipe": true, 00:15:12.096 "enable_quickack": false, 00:15:12.096 "enable_placement_id": 0, 00:15:12.096 "enable_zerocopy_send_server": false, 00:15:12.096 "enable_zerocopy_send_client": false, 00:15:12.096 "zerocopy_threshold": 0, 00:15:12.096 "tls_version": 0, 00:15:12.096 "enable_ktls": false 00:15:12.096 } 00:15:12.096 } 00:15:12.096 ] 00:15:12.096 }, 00:15:12.096 { 00:15:12.096 "subsystem": "vmd", 00:15:12.096 "config": [] 00:15:12.096 }, 00:15:12.096 { 00:15:12.096 "subsystem": "accel", 00:15:12.096 "config": [ 00:15:12.096 { 00:15:12.096 "method": "accel_set_options", 00:15:12.096 "params": { 00:15:12.096 "small_cache_size": 128, 00:15:12.096 "large_cache_size": 16, 00:15:12.096 "task_count": 2048, 00:15:12.096 "sequence_count": 2048, 00:15:12.096 "buf_count": 2048 00:15:12.096 } 00:15:12.096 } 00:15:12.096 ] 00:15:12.096 }, 
00:15:12.096 { 00:15:12.096 "subsystem": "bdev", 00:15:12.096 "config": [ 00:15:12.096 { 00:15:12.096 "method": "bdev_set_options", 00:15:12.096 "params": { 00:15:12.096 "bdev_io_pool_size": 65535, 00:15:12.096 "bdev_io_cache_size": 256, 00:15:12.096 "bdev_auto_examine": true, 00:15:12.096 "iobuf_small_cache_size": 128, 00:15:12.096 "iobuf_large_cache_size": 16 00:15:12.096 } 00:15:12.096 }, 00:15:12.096 { 00:15:12.096 "method": "bdev_raid_set_options", 00:15:12.096 "params": { 00:15:12.096 "process_window_size_kb": 1024, 00:15:12.096 "process_max_bandwidth_mb_sec": 0 00:15:12.096 } 00:15:12.096 }, 00:15:12.096 { 00:15:12.096 "method": "bdev_iscsi_set_options", 00:15:12.096 "params": { 00:15:12.096 "timeout_sec": 30 00:15:12.096 } 00:15:12.096 }, 00:15:12.096 { 00:15:12.096 "method": "bdev_nvme_set_options", 00:15:12.096 "params": { 00:15:12.096 "action_on_timeout": "none", 00:15:12.096 "timeout_us": 0, 00:15:12.096 "timeout_admin_us": 0, 00:15:12.096 "keep_alive_timeout_ms": 10000, 00:15:12.096 "arbitration_burst": 0, 00:15:12.096 "low_priority_weight": 0, 00:15:12.096 "medium_priority_weight": 0, 00:15:12.096 "high_priority_weight": 0, 00:15:12.096 "nvme_adminq_poll_period_us": 10000, 00:15:12.096 "nvme_ioq_poll_period_us": 0, 00:15:12.096 "io_queue_requests": 512, 00:15:12.096 "delay_cmd_submit": true, 00:15:12.096 "transport_retry_count": 4, 00:15:12.096 "bdev_retry_count": 3, 00:15:12.096 "transport_ack_timeout": 0, 00:15:12.096 "ctrlr_loss_timeout_sec": 0, 00:15:12.096 "reconnect_delay_sec": 0, 00:15:12.096 "fast_io_fail_timeout_sec": 0, 00:15:12.096 "disable_auto_failback": false, 00:15:12.096 "generate_uuids": false, 00:15:12.096 "transport_tos": 0, 00:15:12.096 "nvme_error_stat": false, 00:15:12.096 "rdma_srq_size": 0, 00:15:12.096 "io_path_stat": false, 00:15:12.096 "allow_accel_sequence": false, 00:15:12.096 "rdma_max_cq_size": 0, 00:15:12.096 "rdma_cm_event_timeout_ms": 0, 00:15:12.096 "dhchap_digests": [ 00:15:12.096 "sha256", 00:15:12.096 "sha384", 00:15:12.096 "sha512" 00:15:12.096 ], 00:15:12.096 "dhchap_dhgroups": [ 00:15:12.096 "null", 00:15:12.096 "ffdhe2048", 00:15:12.096 "ffdhe3072", 00:15:12.096 "ffdhe4096", 00:15:12.096 "ffdhe6144", 00:15:12.096 "ffdhe8192" 00:15:12.096 ] 00:15:12.096 } 00:15:12.096 }, 00:15:12.096 { 00:15:12.096 "method": "bdev_nvme_attach_controller", 00:15:12.096 "params": { 00:15:12.096 "name": "TLSTEST", 00:15:12.096 "trtype": "TCP", 00:15:12.096 "adrfam": "IPv4", 00:15:12.096 "traddr": "10.0.0.3", 00:15:12.096 "trsvcid": "4420", 00:15:12.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.096 "prchk_reftag": false, 00:15:12.096 "prchk_guard": false, 00:15:12.096 "ctrlr_loss_timeout_sec": 0, 00:15:12.096 "reconnect_delay_sec": 0, 00:15:12.096 "fast_io_fail_timeout_sec": 0, 00:15:12.096 "psk": "key0", 00:15:12.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:12.096 "hdgst": false, 00:15:12.096 "ddgst": false, 00:15:12.096 "multipath": "multipath" 00:15:12.096 } 00:15:12.096 }, 00:15:12.096 { 00:15:12.096 "method": "bdev_nvme_set_hotplug", 00:15:12.096 "params": { 00:15:12.096 "period_us": 100000, 00:15:12.096 "enable": false 00:15:12.096 } 00:15:12.096 }, 00:15:12.096 { 00:15:12.096 "method": "bdev_wait_for_examine" 00:15:12.096 } 00:15:12.096 ] 00:15:12.096 }, 00:15:12.096 { 00:15:12.096 "subsystem": "nbd", 00:15:12.096 "config": [] 00:15:12.096 } 00:15:12.096 ] 00:15:12.096 }' 00:15:12.096 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.096 [2024-11-20 13:34:24.020785] Starting SPDK v25.01-pre 
git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:12.096 [2024-11-20 13:34:24.021070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72537 ] 00:15:12.355 [2024-11-20 13:34:24.172672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.355 [2024-11-20 13:34:24.240732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.614 [2024-11-20 13:34:24.380217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:12.614 [2024-11-20 13:34:24.432501] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:13.185 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.185 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:13.185 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:13.444 Running I/O for 10 seconds... 00:15:15.317 3882.00 IOPS, 15.16 MiB/s [2024-11-20T13:34:28.212Z] 3948.50 IOPS, 15.42 MiB/s [2024-11-20T13:34:29.587Z] 3908.33 IOPS, 15.27 MiB/s [2024-11-20T13:34:30.522Z] 3911.75 IOPS, 15.28 MiB/s [2024-11-20T13:34:31.459Z] 3914.00 IOPS, 15.29 MiB/s [2024-11-20T13:34:32.394Z] 3910.83 IOPS, 15.28 MiB/s [2024-11-20T13:34:33.330Z] 3909.71 IOPS, 15.27 MiB/s [2024-11-20T13:34:34.264Z] 3939.62 IOPS, 15.39 MiB/s [2024-11-20T13:34:35.200Z] 3941.78 IOPS, 15.40 MiB/s [2024-11-20T13:34:35.200Z] 3949.30 IOPS, 15.43 MiB/s 00:15:23.243 Latency(us) 00:15:23.243 [2024-11-20T13:34:35.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.243 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:23.243 Verification LBA range: start 0x0 length 0x2000 00:15:23.243 TLSTESTn1 : 10.02 3954.93 15.45 0.00 0.00 32303.92 6672.76 25499.46 00:15:23.243 [2024-11-20T13:34:35.200Z] =================================================================================================================== 00:15:23.243 [2024-11-20T13:34:35.200Z] Total : 3954.93 15.45 0.00 0.00 32303.92 6672.76 25499.46 00:15:23.243 { 00:15:23.243 "results": [ 00:15:23.243 { 00:15:23.243 "job": "TLSTESTn1", 00:15:23.243 "core_mask": "0x4", 00:15:23.243 "workload": "verify", 00:15:23.243 "status": "finished", 00:15:23.243 "verify_range": { 00:15:23.243 "start": 0, 00:15:23.243 "length": 8192 00:15:23.243 }, 00:15:23.243 "queue_depth": 128, 00:15:23.243 "io_size": 4096, 00:15:23.243 "runtime": 10.017126, 00:15:23.243 "iops": 3954.9267923753778, 00:15:23.243 "mibps": 15.44893278271632, 00:15:23.243 "io_failed": 0, 00:15:23.243 "io_timeout": 0, 00:15:23.243 "avg_latency_us": 32303.920071043878, 00:15:23.243 "min_latency_us": 6672.756363636364, 00:15:23.243 "max_latency_us": 25499.46181818182 00:15:23.243 } 00:15:23.243 ], 00:15:23.243 "core_count": 1 00:15:23.243 } 00:15:23.243 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:23.243 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72537 00:15:23.243 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72537 ']' 00:15:23.243 
13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72537 00:15:23.243 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72537 00:15:23.502 killing process with pid 72537 00:15:23.502 Received shutdown signal, test time was about 10.000000 seconds 00:15:23.502 00:15:23.502 Latency(us) 00:15:23.502 [2024-11-20T13:34:35.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.502 [2024-11-20T13:34:35.459Z] =================================================================================================================== 00:15:23.502 [2024-11-20T13:34:35.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72537' 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72537 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72537 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72505 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72505 ']' 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72505 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72505 00:15:23.502 killing process with pid 72505 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72505' 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72505 00:15:23.502 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72505 00:15:23.761 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:23.761 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:23.761 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:23.761 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.761 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:23.761 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72671 
00:15:23.761 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72671 00:15:23.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.761 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72671 ']' 00:15:23.761 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.761 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.761 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.761 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.761 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.019 [2024-11-20 13:34:35.758814] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:24.019 [2024-11-20 13:34:35.759151] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.019 [2024-11-20 13:34:35.917398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.277 [2024-11-20 13:34:35.987139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.277 [2024-11-20 13:34:35.987452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.277 [2024-11-20 13:34:35.987622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.277 [2024-11-20 13:34:35.987760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.277 [2024-11-20 13:34:35.987775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:24.277 [2024-11-20 13:34:35.988372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.277 [2024-11-20 13:34:36.046181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.212 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.212 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:25.212 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:25.212 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:25.212 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.212 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.212 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Dy3gAuYSbe 00:15:25.212 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Dy3gAuYSbe 00:15:25.212 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:25.469 [2024-11-20 13:34:37.170643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.469 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:25.728 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:25.986 [2024-11-20 13:34:37.814803] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:25.986 [2024-11-20 13:34:37.815055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:25.986 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:26.244 malloc0 00:15:26.244 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:26.502 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe 00:15:26.761 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:27.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
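The setup_nvmf_tgt sequence traced above reduces to a short series of rpc.py calls against the freshly started target: create the TCP transport, create the subsystem with a malloc-backed namespace, register the PSK file as a keyring key, allow the host with that key, and open a listener with -k so the port only accepts a TLS-secured channel. A rough sketch using the same names and paths as this run (the PSK file path is the one used throughout this log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o                              # TCP transport, flags as used by tls.sh
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k maps to "secure_channel": true
$rpc bdev_malloc_create 32 4096 -b malloc0                        # 32 MiB bdev with 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe                # PSK file -> keyring entry "key0"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0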
00:15:27.019 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72734 00:15:27.019 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:27.019 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:27.019 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72734 /var/tmp/bdevperf.sock 00:15:27.019 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72734 ']' 00:15:27.019 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:27.019 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.019 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:27.019 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.019 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.019 [2024-11-20 13:34:38.961145] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:27.019 [2024-11-20 13:34:38.961438] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72734 ] 00:15:27.278 [2024-11-20 13:34:39.104467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.278 [2024-11-20 13:34:39.164272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.278 [2024-11-20 13:34:39.220118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.212 13:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.212 13:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:28.212 13:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe 00:15:28.470 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:28.727 [2024-11-20 13:34:40.439160] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:28.727 nvme0n1 00:15:28.727 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:28.986 Running I/O for 1 seconds... 
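On the initiator side, the same key is then registered with the bdevperf application over its own RPC socket, the controller is attached with --psk so the NVMe/TCP connection is established over TLS, and the I/O job is kicked off with bdevperf.py, mirroring the trace above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

$rpc -s $sock keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe       # same PSK file as the target side
$rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1    # TLS-protected connect
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests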
00:15:29.970 3908.00 IOPS, 15.27 MiB/s 00:15:29.970 Latency(us) 00:15:29.970 [2024-11-20T13:34:41.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.970 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:29.970 Verification LBA range: start 0x0 length 0x2000 00:15:29.970 nvme0n1 : 1.02 3939.38 15.39 0.00 0.00 32038.19 7626.01 22878.02 00:15:29.970 [2024-11-20T13:34:41.927Z] =================================================================================================================== 00:15:29.970 [2024-11-20T13:34:41.927Z] Total : 3939.38 15.39 0.00 0.00 32038.19 7626.01 22878.02 00:15:29.970 { 00:15:29.970 "results": [ 00:15:29.970 { 00:15:29.970 "job": "nvme0n1", 00:15:29.970 "core_mask": "0x2", 00:15:29.970 "workload": "verify", 00:15:29.970 "status": "finished", 00:15:29.970 "verify_range": { 00:15:29.970 "start": 0, 00:15:29.970 "length": 8192 00:15:29.970 }, 00:15:29.970 "queue_depth": 128, 00:15:29.970 "io_size": 4096, 00:15:29.970 "runtime": 1.024526, 00:15:29.970 "iops": 3939.3826999021985, 00:15:29.970 "mibps": 15.388213671492963, 00:15:29.970 "io_failed": 0, 00:15:29.970 "io_timeout": 0, 00:15:29.970 "avg_latency_us": 32038.18941526264, 00:15:29.970 "min_latency_us": 7626.007272727273, 00:15:29.970 "max_latency_us": 22878.02181818182 00:15:29.970 } 00:15:29.970 ], 00:15:29.970 "core_count": 1 00:15:29.970 } 00:15:29.970 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72734 00:15:29.970 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72734 ']' 00:15:29.970 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72734 00:15:29.970 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:29.970 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.970 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72734 00:15:29.970 killing process with pid 72734 00:15:29.970 Received shutdown signal, test time was about 1.000000 seconds 00:15:29.970 00:15:29.970 Latency(us) 00:15:29.970 [2024-11-20T13:34:41.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.970 [2024-11-20T13:34:41.927Z] =================================================================================================================== 00:15:29.970 [2024-11-20T13:34:41.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:29.970 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:29.970 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:29.970 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72734' 00:15:29.970 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72734 00:15:29.970 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72734 00:15:30.228 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72671 00:15:30.228 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72671 ']' 00:15:30.228 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72671 00:15:30.228 13:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:30.228 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.228 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72671 00:15:30.228 killing process with pid 72671 00:15:30.228 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.228 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.228 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72671' 00:15:30.228 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72671 00:15:30.228 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72671 00:15:30.486 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:30.486 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:30.486 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:30.486 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.486 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72785 00:15:30.486 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:30.486 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72785 00:15:30.486 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72785 ']' 00:15:30.486 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.486 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.486 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.486 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.486 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.486 [2024-11-20 13:34:42.287723] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:30.486 [2024-11-20 13:34:42.287831] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.486 [2024-11-20 13:34:42.437617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.744 [2024-11-20 13:34:42.496758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.744 [2024-11-20 13:34:42.496818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:30.744 [2024-11-20 13:34:42.496831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.744 [2024-11-20 13:34:42.496839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.744 [2024-11-20 13:34:42.496847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.744 [2024-11-20 13:34:42.497300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.744 [2024-11-20 13:34:42.552447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.744 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.744 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:30.744 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:30.744 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:30.744 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.744 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.744 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:30.744 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.744 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.744 [2024-11-20 13:34:42.670244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.744 malloc0 00:15:31.003 [2024-11-20 13:34:42.701362] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:31.003 [2024-11-20 13:34:42.701615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:31.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:31.003 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.003 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72809 00:15:31.003 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:31.003 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72809 /var/tmp/bdevperf.sock 00:15:31.003 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72809 ']' 00:15:31.003 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:31.003 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.003 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:31.003 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.003 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.003 [2024-11-20 13:34:42.791579] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:31.003 [2024-11-20 13:34:42.791891] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72809 ] 00:15:31.003 [2024-11-20 13:34:42.941565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.260 [2024-11-20 13:34:43.013392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.260 [2024-11-20 13:34:43.071524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:31.260 13:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.260 13:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:31.260 13:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dy3gAuYSbe 00:15:31.519 13:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:31.778 [2024-11-20 13:34:43.730322] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:32.036 nvme0n1 00:15:32.036 13:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:32.036 Running I/O for 1 seconds... 
00:15:33.412 3712.00 IOPS, 14.50 MiB/s 00:15:33.412 Latency(us) 00:15:33.412 [2024-11-20T13:34:45.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.412 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:33.412 Verification LBA range: start 0x0 length 0x2000 00:15:33.412 nvme0n1 : 1.03 3719.59 14.53 0.00 0.00 34012.94 8043.05 26571.87 00:15:33.412 [2024-11-20T13:34:45.369Z] =================================================================================================================== 00:15:33.412 [2024-11-20T13:34:45.369Z] Total : 3719.59 14.53 0.00 0.00 34012.94 8043.05 26571.87 00:15:33.412 { 00:15:33.412 "results": [ 00:15:33.412 { 00:15:33.412 "job": "nvme0n1", 00:15:33.412 "core_mask": "0x2", 00:15:33.412 "workload": "verify", 00:15:33.412 "status": "finished", 00:15:33.412 "verify_range": { 00:15:33.412 "start": 0, 00:15:33.412 "length": 8192 00:15:33.412 }, 00:15:33.412 "queue_depth": 128, 00:15:33.412 "io_size": 4096, 00:15:33.412 "runtime": 1.032372, 00:15:33.412 "iops": 3719.5894503144214, 00:15:33.412 "mibps": 14.529646290290708, 00:15:33.412 "io_failed": 0, 00:15:33.412 "io_timeout": 0, 00:15:33.412 "avg_latency_us": 34012.93575757576, 00:15:33.412 "min_latency_us": 8043.054545454545, 00:15:33.412 "max_latency_us": 26571.86909090909 00:15:33.412 } 00:15:33.412 ], 00:15:33.412 "core_count": 1 00:15:33.412 } 00:15:33.412 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:33.412 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.412 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.412 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.412 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:33.412 "subsystems": [ 00:15:33.412 { 00:15:33.412 "subsystem": "keyring", 00:15:33.412 "config": [ 00:15:33.412 { 00:15:33.412 "method": "keyring_file_add_key", 00:15:33.412 "params": { 00:15:33.412 "name": "key0", 00:15:33.412 "path": "/tmp/tmp.Dy3gAuYSbe" 00:15:33.412 } 00:15:33.412 } 00:15:33.412 ] 00:15:33.412 }, 00:15:33.412 { 00:15:33.412 "subsystem": "iobuf", 00:15:33.412 "config": [ 00:15:33.412 { 00:15:33.412 "method": "iobuf_set_options", 00:15:33.412 "params": { 00:15:33.412 "small_pool_count": 8192, 00:15:33.412 "large_pool_count": 1024, 00:15:33.412 "small_bufsize": 8192, 00:15:33.412 "large_bufsize": 135168, 00:15:33.412 "enable_numa": false 00:15:33.412 } 00:15:33.412 } 00:15:33.412 ] 00:15:33.412 }, 00:15:33.412 { 00:15:33.412 "subsystem": "sock", 00:15:33.412 "config": [ 00:15:33.412 { 00:15:33.412 "method": "sock_set_default_impl", 00:15:33.412 "params": { 00:15:33.412 "impl_name": "uring" 00:15:33.412 } 00:15:33.412 }, 00:15:33.412 { 00:15:33.412 "method": "sock_impl_set_options", 00:15:33.412 "params": { 00:15:33.412 "impl_name": "ssl", 00:15:33.412 "recv_buf_size": 4096, 00:15:33.412 "send_buf_size": 4096, 00:15:33.412 "enable_recv_pipe": true, 00:15:33.412 "enable_quickack": false, 00:15:33.412 "enable_placement_id": 0, 00:15:33.412 "enable_zerocopy_send_server": true, 00:15:33.412 "enable_zerocopy_send_client": false, 00:15:33.412 "zerocopy_threshold": 0, 00:15:33.412 "tls_version": 0, 00:15:33.412 "enable_ktls": false 00:15:33.412 } 00:15:33.412 }, 00:15:33.412 { 00:15:33.412 "method": "sock_impl_set_options", 00:15:33.412 "params": { 00:15:33.412 "impl_name": "posix", 
00:15:33.412 "recv_buf_size": 2097152, 00:15:33.412 "send_buf_size": 2097152, 00:15:33.412 "enable_recv_pipe": true, 00:15:33.412 "enable_quickack": false, 00:15:33.412 "enable_placement_id": 0, 00:15:33.412 "enable_zerocopy_send_server": true, 00:15:33.412 "enable_zerocopy_send_client": false, 00:15:33.412 "zerocopy_threshold": 0, 00:15:33.412 "tls_version": 0, 00:15:33.412 "enable_ktls": false 00:15:33.412 } 00:15:33.412 }, 00:15:33.412 { 00:15:33.412 "method": "sock_impl_set_options", 00:15:33.412 "params": { 00:15:33.412 "impl_name": "uring", 00:15:33.412 "recv_buf_size": 2097152, 00:15:33.412 "send_buf_size": 2097152, 00:15:33.412 "enable_recv_pipe": true, 00:15:33.412 "enable_quickack": false, 00:15:33.412 "enable_placement_id": 0, 00:15:33.412 "enable_zerocopy_send_server": false, 00:15:33.412 "enable_zerocopy_send_client": false, 00:15:33.412 "zerocopy_threshold": 0, 00:15:33.412 "tls_version": 0, 00:15:33.412 "enable_ktls": false 00:15:33.412 } 00:15:33.412 } 00:15:33.412 ] 00:15:33.412 }, 00:15:33.412 { 00:15:33.412 "subsystem": "vmd", 00:15:33.412 "config": [] 00:15:33.412 }, 00:15:33.412 { 00:15:33.412 "subsystem": "accel", 00:15:33.412 "config": [ 00:15:33.412 { 00:15:33.412 "method": "accel_set_options", 00:15:33.413 "params": { 00:15:33.413 "small_cache_size": 128, 00:15:33.413 "large_cache_size": 16, 00:15:33.413 "task_count": 2048, 00:15:33.413 "sequence_count": 2048, 00:15:33.413 "buf_count": 2048 00:15:33.413 } 00:15:33.413 } 00:15:33.413 ] 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "subsystem": "bdev", 00:15:33.413 "config": [ 00:15:33.413 { 00:15:33.413 "method": "bdev_set_options", 00:15:33.413 "params": { 00:15:33.413 "bdev_io_pool_size": 65535, 00:15:33.413 "bdev_io_cache_size": 256, 00:15:33.413 "bdev_auto_examine": true, 00:15:33.413 "iobuf_small_cache_size": 128, 00:15:33.413 "iobuf_large_cache_size": 16 00:15:33.413 } 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "method": "bdev_raid_set_options", 00:15:33.413 "params": { 00:15:33.413 "process_window_size_kb": 1024, 00:15:33.413 "process_max_bandwidth_mb_sec": 0 00:15:33.413 } 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "method": "bdev_iscsi_set_options", 00:15:33.413 "params": { 00:15:33.413 "timeout_sec": 30 00:15:33.413 } 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "method": "bdev_nvme_set_options", 00:15:33.413 "params": { 00:15:33.413 "action_on_timeout": "none", 00:15:33.413 "timeout_us": 0, 00:15:33.413 "timeout_admin_us": 0, 00:15:33.413 "keep_alive_timeout_ms": 10000, 00:15:33.413 "arbitration_burst": 0, 00:15:33.413 "low_priority_weight": 0, 00:15:33.413 "medium_priority_weight": 0, 00:15:33.413 "high_priority_weight": 0, 00:15:33.413 "nvme_adminq_poll_period_us": 10000, 00:15:33.413 "nvme_ioq_poll_period_us": 0, 00:15:33.413 "io_queue_requests": 0, 00:15:33.413 "delay_cmd_submit": true, 00:15:33.413 "transport_retry_count": 4, 00:15:33.413 "bdev_retry_count": 3, 00:15:33.413 "transport_ack_timeout": 0, 00:15:33.413 "ctrlr_loss_timeout_sec": 0, 00:15:33.413 "reconnect_delay_sec": 0, 00:15:33.413 "fast_io_fail_timeout_sec": 0, 00:15:33.413 "disable_auto_failback": false, 00:15:33.413 "generate_uuids": false, 00:15:33.413 "transport_tos": 0, 00:15:33.413 "nvme_error_stat": false, 00:15:33.413 "rdma_srq_size": 0, 00:15:33.413 "io_path_stat": false, 00:15:33.413 "allow_accel_sequence": false, 00:15:33.413 "rdma_max_cq_size": 0, 00:15:33.413 "rdma_cm_event_timeout_ms": 0, 00:15:33.413 "dhchap_digests": [ 00:15:33.413 "sha256", 00:15:33.413 "sha384", 00:15:33.413 "sha512" 00:15:33.413 ], 00:15:33.413 
"dhchap_dhgroups": [ 00:15:33.413 "null", 00:15:33.413 "ffdhe2048", 00:15:33.413 "ffdhe3072", 00:15:33.413 "ffdhe4096", 00:15:33.413 "ffdhe6144", 00:15:33.413 "ffdhe8192" 00:15:33.413 ] 00:15:33.413 } 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "method": "bdev_nvme_set_hotplug", 00:15:33.413 "params": { 00:15:33.413 "period_us": 100000, 00:15:33.413 "enable": false 00:15:33.413 } 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "method": "bdev_malloc_create", 00:15:33.413 "params": { 00:15:33.413 "name": "malloc0", 00:15:33.413 "num_blocks": 8192, 00:15:33.413 "block_size": 4096, 00:15:33.413 "physical_block_size": 4096, 00:15:33.413 "uuid": "b8d8cf17-84d1-46af-90f8-99fbb0b19dcc", 00:15:33.413 "optimal_io_boundary": 0, 00:15:33.413 "md_size": 0, 00:15:33.413 "dif_type": 0, 00:15:33.413 "dif_is_head_of_md": false, 00:15:33.413 "dif_pi_format": 0 00:15:33.413 } 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "method": "bdev_wait_for_examine" 00:15:33.413 } 00:15:33.413 ] 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "subsystem": "nbd", 00:15:33.413 "config": [] 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "subsystem": "scheduler", 00:15:33.413 "config": [ 00:15:33.413 { 00:15:33.413 "method": "framework_set_scheduler", 00:15:33.413 "params": { 00:15:33.413 "name": "static" 00:15:33.413 } 00:15:33.413 } 00:15:33.413 ] 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "subsystem": "nvmf", 00:15:33.413 "config": [ 00:15:33.413 { 00:15:33.413 "method": "nvmf_set_config", 00:15:33.413 "params": { 00:15:33.413 "discovery_filter": "match_any", 00:15:33.413 "admin_cmd_passthru": { 00:15:33.413 "identify_ctrlr": false 00:15:33.413 }, 00:15:33.413 "dhchap_digests": [ 00:15:33.413 "sha256", 00:15:33.413 "sha384", 00:15:33.413 "sha512" 00:15:33.413 ], 00:15:33.413 "dhchap_dhgroups": [ 00:15:33.413 "null", 00:15:33.413 "ffdhe2048", 00:15:33.413 "ffdhe3072", 00:15:33.413 "ffdhe4096", 00:15:33.413 "ffdhe6144", 00:15:33.413 "ffdhe8192" 00:15:33.413 ] 00:15:33.413 } 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "method": "nvmf_set_max_subsystems", 00:15:33.413 "params": { 00:15:33.413 "max_subsystems": 1024 00:15:33.413 } 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "method": "nvmf_set_crdt", 00:15:33.413 "params": { 00:15:33.413 "crdt1": 0, 00:15:33.413 "crdt2": 0, 00:15:33.413 "crdt3": 0 00:15:33.413 } 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "method": "nvmf_create_transport", 00:15:33.413 "params": { 00:15:33.413 "trtype": "TCP", 00:15:33.413 "max_queue_depth": 128, 00:15:33.413 "max_io_qpairs_per_ctrlr": 127, 00:15:33.413 "in_capsule_data_size": 4096, 00:15:33.413 "max_io_size": 131072, 00:15:33.413 "io_unit_size": 131072, 00:15:33.413 "max_aq_depth": 128, 00:15:33.413 "num_shared_buffers": 511, 00:15:33.413 "buf_cache_size": 4294967295, 00:15:33.413 "dif_insert_or_strip": false, 00:15:33.413 "zcopy": false, 00:15:33.413 "c2h_success": false, 00:15:33.413 "sock_priority": 0, 00:15:33.413 "abort_timeout_sec": 1, 00:15:33.413 "ack_timeout": 0, 00:15:33.413 "data_wr_pool_size": 0 00:15:33.413 } 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "method": "nvmf_create_subsystem", 00:15:33.413 "params": { 00:15:33.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.413 "allow_any_host": false, 00:15:33.413 "serial_number": "00000000000000000000", 00:15:33.413 "model_number": "SPDK bdev Controller", 00:15:33.413 "max_namespaces": 32, 00:15:33.413 "min_cntlid": 1, 00:15:33.413 "max_cntlid": 65519, 00:15:33.413 "ana_reporting": false 00:15:33.413 } 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "method": "nvmf_subsystem_add_host", 
00:15:33.413 "params": { 00:15:33.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.413 "host": "nqn.2016-06.io.spdk:host1", 00:15:33.413 "psk": "key0" 00:15:33.413 } 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "method": "nvmf_subsystem_add_ns", 00:15:33.413 "params": { 00:15:33.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.413 "namespace": { 00:15:33.413 "nsid": 1, 00:15:33.413 "bdev_name": "malloc0", 00:15:33.413 "nguid": "B8D8CF1784D146AF90F899FBB0B19DCC", 00:15:33.413 "uuid": "b8d8cf17-84d1-46af-90f8-99fbb0b19dcc", 00:15:33.413 "no_auto_visible": false 00:15:33.413 } 00:15:33.413 } 00:15:33.413 }, 00:15:33.413 { 00:15:33.413 "method": "nvmf_subsystem_add_listener", 00:15:33.413 "params": { 00:15:33.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.413 "listen_address": { 00:15:33.413 "trtype": "TCP", 00:15:33.413 "adrfam": "IPv4", 00:15:33.413 "traddr": "10.0.0.3", 00:15:33.413 "trsvcid": "4420" 00:15:33.413 }, 00:15:33.413 "secure_channel": false, 00:15:33.413 "sock_impl": "ssl" 00:15:33.413 } 00:15:33.413 } 00:15:33.413 ] 00:15:33.413 } 00:15:33.413 ] 00:15:33.413 }' 00:15:33.413 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:33.673 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:33.673 "subsystems": [ 00:15:33.673 { 00:15:33.673 "subsystem": "keyring", 00:15:33.673 "config": [ 00:15:33.673 { 00:15:33.673 "method": "keyring_file_add_key", 00:15:33.673 "params": { 00:15:33.673 "name": "key0", 00:15:33.673 "path": "/tmp/tmp.Dy3gAuYSbe" 00:15:33.673 } 00:15:33.673 } 00:15:33.673 ] 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "subsystem": "iobuf", 00:15:33.673 "config": [ 00:15:33.673 { 00:15:33.673 "method": "iobuf_set_options", 00:15:33.673 "params": { 00:15:33.673 "small_pool_count": 8192, 00:15:33.673 "large_pool_count": 1024, 00:15:33.673 "small_bufsize": 8192, 00:15:33.673 "large_bufsize": 135168, 00:15:33.673 "enable_numa": false 00:15:33.673 } 00:15:33.673 } 00:15:33.673 ] 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "subsystem": "sock", 00:15:33.673 "config": [ 00:15:33.673 { 00:15:33.673 "method": "sock_set_default_impl", 00:15:33.673 "params": { 00:15:33.673 "impl_name": "uring" 00:15:33.673 } 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "method": "sock_impl_set_options", 00:15:33.673 "params": { 00:15:33.673 "impl_name": "ssl", 00:15:33.673 "recv_buf_size": 4096, 00:15:33.673 "send_buf_size": 4096, 00:15:33.673 "enable_recv_pipe": true, 00:15:33.673 "enable_quickack": false, 00:15:33.673 "enable_placement_id": 0, 00:15:33.673 "enable_zerocopy_send_server": true, 00:15:33.673 "enable_zerocopy_send_client": false, 00:15:33.673 "zerocopy_threshold": 0, 00:15:33.673 "tls_version": 0, 00:15:33.673 "enable_ktls": false 00:15:33.673 } 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "method": "sock_impl_set_options", 00:15:33.673 "params": { 00:15:33.673 "impl_name": "posix", 00:15:33.673 "recv_buf_size": 2097152, 00:15:33.673 "send_buf_size": 2097152, 00:15:33.673 "enable_recv_pipe": true, 00:15:33.673 "enable_quickack": false, 00:15:33.673 "enable_placement_id": 0, 00:15:33.673 "enable_zerocopy_send_server": true, 00:15:33.673 "enable_zerocopy_send_client": false, 00:15:33.673 "zerocopy_threshold": 0, 00:15:33.673 "tls_version": 0, 00:15:33.673 "enable_ktls": false 00:15:33.673 } 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "method": "sock_impl_set_options", 00:15:33.673 "params": { 00:15:33.673 "impl_name": "uring", 00:15:33.673 
"recv_buf_size": 2097152, 00:15:33.673 "send_buf_size": 2097152, 00:15:33.673 "enable_recv_pipe": true, 00:15:33.673 "enable_quickack": false, 00:15:33.673 "enable_placement_id": 0, 00:15:33.673 "enable_zerocopy_send_server": false, 00:15:33.673 "enable_zerocopy_send_client": false, 00:15:33.673 "zerocopy_threshold": 0, 00:15:33.673 "tls_version": 0, 00:15:33.673 "enable_ktls": false 00:15:33.673 } 00:15:33.673 } 00:15:33.673 ] 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "subsystem": "vmd", 00:15:33.673 "config": [] 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "subsystem": "accel", 00:15:33.673 "config": [ 00:15:33.673 { 00:15:33.673 "method": "accel_set_options", 00:15:33.673 "params": { 00:15:33.673 "small_cache_size": 128, 00:15:33.673 "large_cache_size": 16, 00:15:33.673 "task_count": 2048, 00:15:33.673 "sequence_count": 2048, 00:15:33.673 "buf_count": 2048 00:15:33.673 } 00:15:33.673 } 00:15:33.673 ] 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "subsystem": "bdev", 00:15:33.673 "config": [ 00:15:33.673 { 00:15:33.673 "method": "bdev_set_options", 00:15:33.673 "params": { 00:15:33.673 "bdev_io_pool_size": 65535, 00:15:33.673 "bdev_io_cache_size": 256, 00:15:33.673 "bdev_auto_examine": true, 00:15:33.673 "iobuf_small_cache_size": 128, 00:15:33.673 "iobuf_large_cache_size": 16 00:15:33.673 } 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "method": "bdev_raid_set_options", 00:15:33.673 "params": { 00:15:33.673 "process_window_size_kb": 1024, 00:15:33.673 "process_max_bandwidth_mb_sec": 0 00:15:33.673 } 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "method": "bdev_iscsi_set_options", 00:15:33.673 "params": { 00:15:33.673 "timeout_sec": 30 00:15:33.673 } 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "method": "bdev_nvme_set_options", 00:15:33.673 "params": { 00:15:33.673 "action_on_timeout": "none", 00:15:33.673 "timeout_us": 0, 00:15:33.673 "timeout_admin_us": 0, 00:15:33.673 "keep_alive_timeout_ms": 10000, 00:15:33.673 "arbitration_burst": 0, 00:15:33.673 "low_priority_weight": 0, 00:15:33.673 "medium_priority_weight": 0, 00:15:33.673 "high_priority_weight": 0, 00:15:33.673 "nvme_adminq_poll_period_us": 10000, 00:15:33.673 "nvme_ioq_poll_period_us": 0, 00:15:33.673 "io_queue_requests": 512, 00:15:33.673 "delay_cmd_submit": true, 00:15:33.673 "transport_retry_count": 4, 00:15:33.673 "bdev_retry_count": 3, 00:15:33.673 "transport_ack_timeout": 0, 00:15:33.673 "ctrlr_loss_timeout_sec": 0, 00:15:33.673 "reconnect_delay_sec": 0, 00:15:33.673 "fast_io_fail_timeout_sec": 0, 00:15:33.673 "disable_auto_failback": false, 00:15:33.673 "generate_uuids": false, 00:15:33.673 "transport_tos": 0, 00:15:33.673 "nvme_error_stat": false, 00:15:33.673 "rdma_srq_size": 0, 00:15:33.673 "io_path_stat": false, 00:15:33.673 "allow_accel_sequence": false, 00:15:33.673 "rdma_max_cq_size": 0, 00:15:33.673 "rdma_cm_event_timeout_ms": 0, 00:15:33.673 "dhchap_digests": [ 00:15:33.673 "sha256", 00:15:33.673 "sha384", 00:15:33.673 "sha512" 00:15:33.673 ], 00:15:33.673 "dhchap_dhgroups": [ 00:15:33.673 "null", 00:15:33.673 "ffdhe2048", 00:15:33.673 "ffdhe3072", 00:15:33.673 "ffdhe4096", 00:15:33.673 "ffdhe6144", 00:15:33.673 "ffdhe8192" 00:15:33.673 ] 00:15:33.673 } 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "method": "bdev_nvme_attach_controller", 00:15:33.673 "params": { 00:15:33.673 "name": "nvme0", 00:15:33.673 "trtype": "TCP", 00:15:33.673 "adrfam": "IPv4", 00:15:33.673 "traddr": "10.0.0.3", 00:15:33.673 "trsvcid": "4420", 00:15:33.673 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.673 "prchk_reftag": false, 00:15:33.673 
"prchk_guard": false, 00:15:33.673 "ctrlr_loss_timeout_sec": 0, 00:15:33.673 "reconnect_delay_sec": 0, 00:15:33.673 "fast_io_fail_timeout_sec": 0, 00:15:33.673 "psk": "key0", 00:15:33.673 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:33.673 "hdgst": false, 00:15:33.673 "ddgst": false, 00:15:33.673 "multipath": "multipath" 00:15:33.673 } 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "method": "bdev_nvme_set_hotplug", 00:15:33.673 "params": { 00:15:33.673 "period_us": 100000, 00:15:33.673 "enable": false 00:15:33.673 } 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "method": "bdev_enable_histogram", 00:15:33.673 "params": { 00:15:33.673 "name": "nvme0n1", 00:15:33.673 "enable": true 00:15:33.673 } 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "method": "bdev_wait_for_examine" 00:15:33.673 } 00:15:33.673 ] 00:15:33.673 }, 00:15:33.673 { 00:15:33.673 "subsystem": "nbd", 00:15:33.673 "config": [] 00:15:33.673 } 00:15:33.673 ] 00:15:33.673 }' 00:15:33.673 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72809 00:15:33.673 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72809 ']' 00:15:33.674 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72809 00:15:33.674 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:33.674 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.674 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72809 00:15:33.674 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:33.674 killing process with pid 72809 00:15:33.674 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:33.674 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72809' 00:15:33.674 Received shutdown signal, test time was about 1.000000 seconds 00:15:33.674 00:15:33.674 Latency(us) 00:15:33.674 [2024-11-20T13:34:45.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.674 [2024-11-20T13:34:45.631Z] =================================================================================================================== 00:15:33.674 [2024-11-20T13:34:45.631Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:33.674 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72809 00:15:33.674 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72809 00:15:33.933 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72785 00:15:33.933 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72785 ']' 00:15:33.933 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72785 00:15:33.933 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:33.933 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.933 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72785 00:15:33.933 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.933 13:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.933 killing process with pid 72785 00:15:33.933 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72785' 00:15:33.933 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72785 00:15:33.933 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72785 00:15:34.192 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:34.192 "subsystems": [ 00:15:34.192 { 00:15:34.192 "subsystem": "keyring", 00:15:34.192 "config": [ 00:15:34.192 { 00:15:34.192 "method": "keyring_file_add_key", 00:15:34.192 "params": { 00:15:34.192 "name": "key0", 00:15:34.192 "path": "/tmp/tmp.Dy3gAuYSbe" 00:15:34.192 } 00:15:34.192 } 00:15:34.192 ] 00:15:34.192 }, 00:15:34.192 { 00:15:34.192 "subsystem": "iobuf", 00:15:34.192 "config": [ 00:15:34.192 { 00:15:34.192 "method": "iobuf_set_options", 00:15:34.192 "params": { 00:15:34.192 "small_pool_count": 8192, 00:15:34.192 "large_pool_count": 1024, 00:15:34.192 "small_bufsize": 8192, 00:15:34.192 "large_bufsize": 135168, 00:15:34.192 "enable_numa": false 00:15:34.192 } 00:15:34.192 } 00:15:34.192 ] 00:15:34.192 }, 00:15:34.192 { 00:15:34.192 "subsystem": "sock", 00:15:34.192 "config": [ 00:15:34.192 { 00:15:34.192 "method": "sock_set_default_impl", 00:15:34.192 "params": { 00:15:34.192 "impl_name": "uring" 00:15:34.193 } 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "method": "sock_impl_set_options", 00:15:34.193 "params": { 00:15:34.193 "impl_name": "ssl", 00:15:34.193 "recv_buf_size": 4096, 00:15:34.193 "send_buf_size": 4096, 00:15:34.193 "enable_recv_pipe": true, 00:15:34.193 "enable_quickack": false, 00:15:34.193 "enable_placement_id": 0, 00:15:34.193 "enable_zerocopy_send_server": true, 00:15:34.193 "enable_zerocopy_send_client": false, 00:15:34.193 "zerocopy_threshold": 0, 00:15:34.193 "tls_version": 0, 00:15:34.193 "enable_ktls": false 00:15:34.193 } 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "method": "sock_impl_set_options", 00:15:34.193 "params": { 00:15:34.193 "impl_name": "posix", 00:15:34.193 "recv_buf_size": 2097152, 00:15:34.193 "send_buf_size": 2097152, 00:15:34.193 "enable_recv_pipe": true, 00:15:34.193 "enable_quickack": false, 00:15:34.193 "enable_placement_id": 0, 00:15:34.193 "enable_zerocopy_send_server": true, 00:15:34.193 "enable_zerocopy_send_client": false, 00:15:34.193 "zerocopy_threshold": 0, 00:15:34.193 "tls_version": 0, 00:15:34.193 "enable_ktls": false 00:15:34.193 } 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "method": "sock_impl_set_options", 00:15:34.193 "params": { 00:15:34.193 "impl_name": "uring", 00:15:34.193 "recv_buf_size": 2097152, 00:15:34.193 "send_buf_size": 2097152, 00:15:34.193 "enable_recv_pipe": true, 00:15:34.193 "enable_quickack": false, 00:15:34.193 "enable_placement_id": 0, 00:15:34.193 "enable_zerocopy_send_server": false, 00:15:34.193 "enable_zerocopy_send_client": false, 00:15:34.193 "zerocopy_threshold": 0, 00:15:34.193 "tls_version": 0, 00:15:34.193 "enable_ktls": false 00:15:34.193 } 00:15:34.193 } 00:15:34.193 ] 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "subsystem": "vmd", 00:15:34.193 "config": [] 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "subsystem": "accel", 00:15:34.193 "config": [ 00:15:34.193 { 00:15:34.193 "method": "accel_set_options", 00:15:34.193 "params": { 00:15:34.193 "small_cache_size": 128, 00:15:34.193 "large_cache_size": 16, 
00:15:34.193 "task_count": 2048, 00:15:34.193 "sequence_count": 2048, 00:15:34.193 "buf_count": 2048 00:15:34.193 } 00:15:34.193 } 00:15:34.193 ] 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "subsystem": "bdev", 00:15:34.193 "config": [ 00:15:34.193 { 00:15:34.193 "method": "bdev_set_options", 00:15:34.193 "params": { 00:15:34.193 "bdev_io_pool_size": 65535, 00:15:34.193 "bdev_io_cache_size": 256, 00:15:34.193 "bdev_auto_examine": true, 00:15:34.193 "iobuf_small_cache_size": 128, 00:15:34.193 "iobuf_large_cache_size": 16 00:15:34.193 } 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "method": "bdev_raid_set_options", 00:15:34.193 "params": { 00:15:34.193 "process_window_size_kb": 1024, 00:15:34.193 "process_max_bandwidth_mb_sec": 0 00:15:34.193 } 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "method": "bdev_iscsi_set_options", 00:15:34.193 "params": { 00:15:34.193 "timeout_sec": 30 00:15:34.193 } 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "method": "bdev_nvme_set_options", 00:15:34.193 "params": { 00:15:34.193 "action_on_timeout": "none", 00:15:34.193 "timeout_us": 0, 00:15:34.193 "timeout_admin_us": 0, 00:15:34.193 "keep_alive_timeout_ms": 10000, 00:15:34.193 "arbitration_burst": 0, 00:15:34.193 "low_priority_weight": 0, 00:15:34.193 "medium_priority_weight": 0, 00:15:34.193 "high_priority_weight": 0, 00:15:34.193 "nvme_adminq_poll_period_us": 10000, 00:15:34.193 "nvme_ioq_poll_period_us": 0, 00:15:34.193 "io_queue_requests": 0, 00:15:34.193 "delay_cmd_submit": true, 00:15:34.193 "transport_retry_count": 4, 00:15:34.193 "bdev_retry_count": 3, 00:15:34.193 "transport_ack_timeout": 0, 00:15:34.193 "ctrlr_loss_timeout_sec": 0, 00:15:34.193 "reconnect_delay_sec": 0, 00:15:34.193 "fast_io_fail_timeout_sec": 0, 00:15:34.193 "disable_auto_failback": false, 00:15:34.193 "generate_uuids": false, 00:15:34.193 "transport_tos": 0, 00:15:34.193 "nvme_error_stat": false, 00:15:34.193 "rdma_srq_size": 0, 00:15:34.193 "io_path_stat": false, 00:15:34.193 "allow_accel_sequence": false, 00:15:34.193 "rdma_max_cq_size": 0, 00:15:34.193 "rdma_cm_event_timeout_ms": 0, 00:15:34.193 "dhchap_digests": [ 00:15:34.193 "sha256", 00:15:34.193 "sha384", 00:15:34.193 "sha512" 00:15:34.193 ], 00:15:34.193 "dhchap_dhgroups": [ 00:15:34.193 "null", 00:15:34.193 "ffdhe2048", 00:15:34.193 "ffdhe3072", 00:15:34.193 "ffdhe4096", 00:15:34.193 "ffdhe6144", 00:15:34.193 "ffdhe8192" 00:15:34.193 ] 00:15:34.193 } 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "method": "bdev_nvme_set_hotplug", 00:15:34.193 "params": { 00:15:34.193 "period_us": 100000, 00:15:34.193 "enable": false 00:15:34.193 } 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "method": "bdev_malloc_create", 00:15:34.193 "params": { 00:15:34.193 "name": "malloc0", 00:15:34.193 "num_blocks": 8192, 00:15:34.193 "block_size": 4096, 00:15:34.193 "physical_block_size": 4096, 00:15:34.193 "uuid": "b8d8cf17-84d1-46af-90f8-99fbb0b19dcc", 00:15:34.193 "optimal_io_boundary": 0, 00:15:34.193 "md_size": 0, 00:15:34.193 "dif_type": 0, 00:15:34.193 "dif_is_head_of_md": false, 00:15:34.193 "dif_pi_format": 0 00:15:34.193 } 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "method": "bdev_wait_for_examine" 00:15:34.193 } 00:15:34.193 ] 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "subsystem": "nbd", 00:15:34.193 "config": [] 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "subsystem": "scheduler", 00:15:34.193 "config": [ 00:15:34.193 { 00:15:34.193 "method": "framework_set_scheduler", 00:15:34.193 "params": { 00:15:34.193 "name": "static" 00:15:34.193 } 00:15:34.193 } 00:15:34.193 ] 00:15:34.193 }, 
00:15:34.193 { 00:15:34.193 "subsystem": "nvmf", 00:15:34.193 "config": [ 00:15:34.193 { 00:15:34.193 "method": "nvmf_set_config", 00:15:34.193 "params": { 00:15:34.193 "discovery_filter": "match_any", 00:15:34.193 "admin_cmd_passthru": { 00:15:34.193 "identify_ctrlr": false 00:15:34.193 }, 00:15:34.193 "dhchap_digests": [ 00:15:34.193 "sha256", 00:15:34.193 "sha384", 00:15:34.193 "sha512" 00:15:34.193 ], 00:15:34.193 "dhchap_dhgroups": [ 00:15:34.193 "null", 00:15:34.193 "ffdhe2048", 00:15:34.193 "ffdhe3072", 00:15:34.193 "ffdhe4096", 00:15:34.193 "ffdhe6144", 00:15:34.193 "ffdhe8192" 00:15:34.193 ] 00:15:34.193 } 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "method": "nvmf_set_max_subsyste 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:34.193 ms", 00:15:34.193 "params": { 00:15:34.193 "max_subsystems": 1024 00:15:34.193 } 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "method": "nvmf_set_crdt", 00:15:34.193 "params": { 00:15:34.193 "crdt1": 0, 00:15:34.193 "crdt2": 0, 00:15:34.193 "crdt3": 0 00:15:34.193 } 00:15:34.193 }, 00:15:34.193 { 00:15:34.193 "method": "nvmf_create_transport", 00:15:34.193 "params": { 00:15:34.193 "trtype": "TCP", 00:15:34.193 "max_queue_depth": 128, 00:15:34.193 "max_io_qpairs_per_ctrlr": 127, 00:15:34.193 "in_capsule_data_size": 4096, 00:15:34.193 "max_io_size": 131072, 00:15:34.193 "io_unit_size": 131072, 00:15:34.193 "max_aq_depth": 128, 00:15:34.193 "num_shared_buffers": 511, 00:15:34.193 "buf_cache_size": 4294967295, 00:15:34.193 "dif_insert_or_strip": false, 00:15:34.193 "zcopy": false, 00:15:34.193 "c2h_success": false, 00:15:34.193 "sock_priority": 0, 00:15:34.193 "abort_timeout_sec": 1, 00:15:34.193 "ack_timeout": 0, 00:15:34.193 "data_wr_pool_size": 0 00:15:34.193 } 00:15:34.193 }, 00:15:34.194 { 00:15:34.194 "method": "nvmf_create_subsystem", 00:15:34.194 "params": { 00:15:34.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.194 "allow_any_host": false, 00:15:34.194 "serial_number": "00000000000000000000", 00:15:34.194 "model_number": "SPDK bdev Controller", 00:15:34.194 "max_namespaces": 32, 00:15:34.194 "min_cntlid": 1, 00:15:34.194 "max_cntlid": 65519, 00:15:34.194 "ana_reporting": false 00:15:34.194 } 00:15:34.194 }, 00:15:34.194 { 00:15:34.194 "method": "nvmf_subsystem_add_host", 00:15:34.194 "params": { 00:15:34.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.194 "host": "nqn.2016-06.io.spdk:host1", 00:15:34.194 "psk": "key0" 00:15:34.194 } 00:15:34.194 }, 00:15:34.194 { 00:15:34.194 "method": "nvmf_subsystem_add_ns", 00:15:34.194 "params": { 00:15:34.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.194 "namespace": { 00:15:34.194 "nsid": 1, 00:15:34.194 "bdev_name": "malloc0", 00:15:34.194 "nguid": "B8D8CF1784D146AF90F899FBB0B19DCC", 00:15:34.194 "uuid": "b8d8cf17-84d1-46af-90f8-99fbb0b19dcc", 00:15:34.194 "no_auto_visible": false 00:15:34.194 } 00:15:34.194 } 00:15:34.194 }, 00:15:34.194 { 00:15:34.194 "method": "nvmf_subsystem_add_listener", 00:15:34.194 "params": { 00:15:34.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.194 "listen_address": { 00:15:34.194 "trtype": "TCP", 00:15:34.194 "adrfam": "IPv4", 00:15:34.194 "traddr": "10.0.0.3", 00:15:34.194 "trsvcid": "4420" 00:15:34.194 }, 00:15:34.194 "secure_channel": false, 00:15:34.194 "sock_impl": "ssl" 00:15:34.194 } 00:15:34.194 } 00:15:34.194 ] 00:15:34.194 } 00:15:34.194 ] 00:15:34.194 }' 00:15:34.194 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:34.194 13:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:34.194 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.194 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72861 00:15:34.194 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:34.194 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72861 00:15:34.194 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72861 ']' 00:15:34.194 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.194 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.194 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.194 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.194 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.194 [2024-11-20 13:34:46.079462] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:34.194 [2024-11-20 13:34:46.079571] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.454 [2024-11-20 13:34:46.230170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.454 [2024-11-20 13:34:46.294861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.454 [2024-11-20 13:34:46.294921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.454 [2024-11-20 13:34:46.294949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.454 [2024-11-20 13:34:46.294960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.454 [2024-11-20 13:34:46.294968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
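The /dev/fd/62 configuration fed to nvmf_tgt here is the full save_config dump captured above; only a handful of entries carry the TLS state. A condensed excerpt, with values copied verbatim from that dump and every other subsystem left at its defaults:

  { "method": "keyring_file_add_key", "params": { "name": "key0", "path": "/tmp/tmp.Dy3gAuYSbe" } }
  { "method": "nvmf_subsystem_add_host", "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } }
  { "method": "nvmf_subsystem_add_listener", "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "listen_address": { "trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.3", "trsvcid": "4420" }, "secure_channel": false, "sock_impl": "ssl" } }

The PSK is referenced by keyring name on both sides; the listener pins the ssl socket implementation while leaving secure_channel false, so TLS here hinges on the per-host psk entry rather than being required for every connection.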
00:15:34.454 [2024-11-20 13:34:46.295469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.713 [2024-11-20 13:34:46.465909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.713 [2024-11-20 13:34:46.548914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.713 [2024-11-20 13:34:46.580866] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:34.713 [2024-11-20 13:34:46.581104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:35.280 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.280 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:35.280 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:35.280 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:35.280 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.280 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.280 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72895 00:15:35.280 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72895 /var/tmp/bdevperf.sock 00:15:35.280 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72895 ']' 00:15:35.280 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:35.280 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:35.281 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:35.281 "subsystems": [ 00:15:35.281 { 00:15:35.281 "subsystem": "keyring", 00:15:35.281 "config": [ 00:15:35.281 { 00:15:35.281 "method": "keyring_file_add_key", 00:15:35.281 "params": { 00:15:35.281 "name": "key0", 00:15:35.281 "path": "/tmp/tmp.Dy3gAuYSbe" 00:15:35.281 } 00:15:35.281 } 00:15:35.281 ] 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "subsystem": "iobuf", 00:15:35.281 "config": [ 00:15:35.281 { 00:15:35.281 "method": "iobuf_set_options", 00:15:35.281 "params": { 00:15:35.281 "small_pool_count": 8192, 00:15:35.281 "large_pool_count": 1024, 00:15:35.281 "small_bufsize": 8192, 00:15:35.281 "large_bufsize": 135168, 00:15:35.281 "enable_numa": false 00:15:35.281 } 00:15:35.281 } 00:15:35.281 ] 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "subsystem": "sock", 00:15:35.281 "config": [ 00:15:35.281 { 00:15:35.281 "method": "sock_set_default_impl", 00:15:35.281 "params": { 00:15:35.281 "impl_name": "uring" 00:15:35.281 } 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "method": "sock_impl_set_options", 00:15:35.281 "params": { 00:15:35.281 "impl_name": "ssl", 00:15:35.281 "recv_buf_size": 4096, 00:15:35.281 "send_buf_size": 4096, 00:15:35.281 "enable_recv_pipe": true, 00:15:35.281 "enable_quickack": false, 00:15:35.281 "enable_placement_id": 0, 00:15:35.281 "enable_zerocopy_send_server": true, 00:15:35.281 "enable_zerocopy_send_client": false, 00:15:35.281 "zerocopy_threshold": 0, 00:15:35.281 "tls_version": 0, 00:15:35.281 "enable_ktls": 
false 00:15:35.281 } 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "method": "sock_impl_set_options", 00:15:35.281 "params": { 00:15:35.281 "impl_name": "posix", 00:15:35.281 "recv_buf_size": 2097152, 00:15:35.281 "send_buf_size": 2097152, 00:15:35.281 "enable_recv_pipe": true, 00:15:35.281 "enable_quickack": false, 00:15:35.281 "enable_placement_id": 0, 00:15:35.281 "enable_zerocopy_send_server": true, 00:15:35.281 "enable_zerocopy_send_client": false, 00:15:35.281 "zerocopy_threshold": 0, 00:15:35.281 "tls_version": 0, 00:15:35.281 "enable_ktls": false 00:15:35.281 } 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "method": "sock_impl_set_options", 00:15:35.281 "params": { 00:15:35.281 "impl_name": "uring", 00:15:35.281 "recv_buf_size": 2097152, 00:15:35.281 "send_buf_size": 2097152, 00:15:35.281 "enable_recv_pipe": true, 00:15:35.281 "enable_quickack": false, 00:15:35.281 "enable_placement_id": 0, 00:15:35.281 "enable_zerocopy_send_server": false, 00:15:35.281 "enable_zerocopy_send_client": false, 00:15:35.281 "zerocopy_threshold": 0, 00:15:35.281 "tls_version": 0, 00:15:35.281 "enable_ktls": false 00:15:35.281 } 00:15:35.281 } 00:15:35.281 ] 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "subsystem": "vmd", 00:15:35.281 "config": [] 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "subsystem": "accel", 00:15:35.281 "config": [ 00:15:35.281 { 00:15:35.281 "method": "accel_set_options", 00:15:35.281 "params": { 00:15:35.281 "small_cache_size": 128, 00:15:35.281 "large_cache_size": 16, 00:15:35.281 "task_count": 2048, 00:15:35.281 "sequence_count": 2048, 00:15:35.281 "buf_count": 2048 00:15:35.281 } 00:15:35.281 } 00:15:35.281 ] 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "subsystem": "bdev", 00:15:35.281 "config": [ 00:15:35.281 { 00:15:35.281 "method": "bdev_set_options", 00:15:35.281 "params": { 00:15:35.281 "bdev_io_pool_size": 65535, 00:15:35.281 "bdev_io_cache_size": 256, 00:15:35.281 "bdev_auto_examine": true, 00:15:35.281 "iobuf_small_cache_size": 128, 00:15:35.281 "iobuf_large_cache_size": 16 00:15:35.281 } 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "method": "bdev_raid_set_options", 00:15:35.281 "params": { 00:15:35.281 "process_window_size_kb": 1024, 00:15:35.281 "process_max_bandwidth_mb_sec": 0 00:15:35.281 } 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "method": "bdev_iscsi_set_options", 00:15:35.281 "params": { 00:15:35.281 "timeout_sec": 30 00:15:35.281 } 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "method": "bdev_nvme_set_options", 00:15:35.281 "params": { 00:15:35.281 "action_on_timeout": "none", 00:15:35.281 "timeout_us": 0, 00:15:35.281 "timeout_admin_us": 0, 00:15:35.281 "keep_alive_timeout_ms": 10000, 00:15:35.281 "arbitration_burst": 0, 00:15:35.281 "low_priority_weight": 0, 00:15:35.281 "medium_priority_weight": 0, 00:15:35.281 "high_priority_weight": 0, 00:15:35.281 "nvme_adminq_poll_period_us": 10000, 00:15:35.281 "nvme_ioq_poll_period_us": 0, 00:15:35.281 "io_queue_requests": 512, 00:15:35.281 "delay_cmd_submit": true, 00:15:35.281 "transport_retry_count": 4, 00:15:35.281 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.281 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:35.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:35.281 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.281 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.281 "bdev_retry_count": 3, 00:15:35.281 "transport_ack_timeout": 0, 00:15:35.281 "ctrlr_loss_timeout_sec": 0, 00:15:35.281 "reconnect_delay_sec": 0, 00:15:35.281 "fast_io_fail_timeout_sec": 0, 00:15:35.281 "disable_auto_failback": false, 00:15:35.281 "generate_uuids": false, 00:15:35.281 "transport_tos": 0, 00:15:35.281 "nvme_error_stat": false, 00:15:35.281 "rdma_srq_size": 0, 00:15:35.281 "io_path_stat": false, 00:15:35.281 "allow_accel_sequence": false, 00:15:35.281 "rdma_max_cq_size": 0, 00:15:35.281 "rdma_cm_event_timeout_ms": 0, 00:15:35.281 "dhchap_digests": [ 00:15:35.281 "sha256", 00:15:35.281 "sha384", 00:15:35.281 "sha512" 00:15:35.281 ], 00:15:35.281 "dhchap_dhgroups": [ 00:15:35.281 "null", 00:15:35.281 "ffdhe2048", 00:15:35.281 "ffdhe3072", 00:15:35.281 "ffdhe4096", 00:15:35.281 "ffdhe6144", 00:15:35.281 "ffdhe8192" 00:15:35.281 ] 00:15:35.281 } 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "method": "bdev_nvme_attach_controller", 00:15:35.281 "params": { 00:15:35.281 "name": "nvme0", 00:15:35.281 "trtype": "TCP", 00:15:35.281 "adrfam": "IPv4", 00:15:35.281 "traddr": "10.0.0.3", 00:15:35.281 "trsvcid": "4420", 00:15:35.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.281 "prchk_reftag": false, 00:15:35.281 "prchk_guard": false, 00:15:35.281 "ctrlr_loss_timeout_sec": 0, 00:15:35.281 "reconnect_delay_sec": 0, 00:15:35.281 "fast_io_fail_timeout_sec": 0, 00:15:35.281 "psk": "key0", 00:15:35.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:35.281 "hdgst": false, 00:15:35.281 "ddgst": false, 00:15:35.281 "multipath": "multipath" 00:15:35.281 } 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "method": "bdev_nvme_set_hotplug", 00:15:35.281 "params": { 00:15:35.281 "period_us": 100000, 00:15:35.281 "enable": false 00:15:35.281 } 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "method": "bdev_enable_histogram", 00:15:35.281 "params": { 00:15:35.281 "name": "nvme0n1", 00:15:35.281 "enable": true 00:15:35.281 } 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "method": "bdev_wait_for_examine" 00:15:35.281 } 00:15:35.281 ] 00:15:35.281 }, 00:15:35.281 { 00:15:35.281 "subsystem": "nbd", 00:15:35.281 "config": [] 00:15:35.281 } 00:15:35.281 ] 00:15:35.281 }' 00:15:35.281 [2024-11-20 13:34:47.144470] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:15:35.282 [2024-11-20 13:34:47.144752] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72895 ] 00:15:35.541 [2024-11-20 13:34:47.289770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.541 [2024-11-20 13:34:47.353363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.541 [2024-11-20 13:34:47.495133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.799 [2024-11-20 13:34:47.549598] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:36.366 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.367 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:36.367 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:36.367 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:36.625 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.625 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:36.883 Running I/O for 1 seconds... 00:15:37.816 3733.00 IOPS, 14.58 MiB/s 00:15:37.816 Latency(us) 00:15:37.816 [2024-11-20T13:34:49.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.816 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:37.816 Verification LBA range: start 0x0 length 0x2000 00:15:37.816 nvme0n1 : 1.02 3778.96 14.76 0.00 0.00 33535.52 6881.28 35270.28 00:15:37.816 [2024-11-20T13:34:49.773Z] =================================================================================================================== 00:15:37.816 [2024-11-20T13:34:49.773Z] Total : 3778.96 14.76 0.00 0.00 33535.52 6881.28 35270.28 00:15:37.816 { 00:15:37.816 "results": [ 00:15:37.816 { 00:15:37.816 "job": "nvme0n1", 00:15:37.816 "core_mask": "0x2", 00:15:37.816 "workload": "verify", 00:15:37.816 "status": "finished", 00:15:37.817 "verify_range": { 00:15:37.817 "start": 0, 00:15:37.817 "length": 8192 00:15:37.817 }, 00:15:37.817 "queue_depth": 128, 00:15:37.817 "io_size": 4096, 00:15:37.817 "runtime": 1.02171, 00:15:37.817 "iops": 3778.958804357401, 00:15:37.817 "mibps": 14.761557829521097, 00:15:37.817 "io_failed": 0, 00:15:37.817 "io_timeout": 0, 00:15:37.817 "avg_latency_us": 33535.51851192579, 00:15:37.817 "min_latency_us": 6881.28, 00:15:37.817 "max_latency_us": 35270.28363636364 00:15:37.817 } 00:15:37.817 ], 00:15:37.817 "core_count": 1 00:15:37.817 } 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:15:37.817 
13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:37.817 nvmf_trace.0 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72895 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72895 ']' 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72895 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.817 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72895 00:15:38.075 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:38.075 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:38.075 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72895' 00:15:38.075 killing process with pid 72895 00:15:38.075 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72895 00:15:38.075 Received shutdown signal, test time was about 1.000000 seconds 00:15:38.075 00:15:38.075 Latency(us) 00:15:38.075 [2024-11-20T13:34:50.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.075 [2024-11-20T13:34:50.032Z] =================================================================================================================== 00:15:38.075 [2024-11-20T13:34:50.032Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:38.075 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72895 00:15:38.075 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:38.075 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:38.075 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:38.334 rmmod nvme_tcp 00:15:38.334 rmmod nvme_fabrics 00:15:38.334 rmmod nvme_keyring 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72861 ']' 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72861 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72861 ']' 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72861 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72861 00:15:38.334 killing process with pid 72861 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72861' 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72861 00:15:38.334 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72861 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:38.593 13:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:38.593 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.5SPbpVEIA7 /tmp/tmp.x9gO6RE1Xn /tmp/tmp.Dy3gAuYSbe 00:15:38.851 00:15:38.851 real 1m28.327s 00:15:38.851 user 2m26.273s 00:15:38.851 sys 0m27.322s 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.851 ************************************ 00:15:38.851 END TEST nvmf_tls 00:15:38.851 ************************************ 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:38.851 ************************************ 00:15:38.851 START TEST nvmf_fips 00:15:38.851 ************************************ 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:38.851 * Looking for test storage... 
00:15:38.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:15:38.851 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:39.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.120 --rc genhtml_branch_coverage=1 00:15:39.120 --rc genhtml_function_coverage=1 00:15:39.120 --rc genhtml_legend=1 00:15:39.120 --rc geninfo_all_blocks=1 00:15:39.120 --rc geninfo_unexecuted_blocks=1 00:15:39.120 00:15:39.120 ' 00:15:39.120 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:39.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.120 --rc genhtml_branch_coverage=1 00:15:39.120 --rc genhtml_function_coverage=1 00:15:39.121 --rc genhtml_legend=1 00:15:39.121 --rc geninfo_all_blocks=1 00:15:39.121 --rc geninfo_unexecuted_blocks=1 00:15:39.121 00:15:39.121 ' 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:39.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.121 --rc genhtml_branch_coverage=1 00:15:39.121 --rc genhtml_function_coverage=1 00:15:39.121 --rc genhtml_legend=1 00:15:39.121 --rc geninfo_all_blocks=1 00:15:39.121 --rc geninfo_unexecuted_blocks=1 00:15:39.121 00:15:39.121 ' 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:39.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.121 --rc genhtml_branch_coverage=1 00:15:39.121 --rc genhtml_function_coverage=1 00:15:39.121 --rc genhtml_legend=1 00:15:39.121 --rc geninfo_all_blocks=1 00:15:39.121 --rc geninfo_unexecuted_blocks=1 00:15:39.121 00:15:39.121 ' 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
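A minimal standalone sketch of the version gate traced above: the lcov "1.15 < 2" check here and the OpenSSL ">= 3.0.0" check further below both go through cmp_versions in scripts/common.sh, which compares the dotted fields element-wise. The helper name version_ge and the use of GNU sort -V below are illustrative assumptions, not the actual scripts/common.sh implementation:

    version_ge() {
        # succeeds when $1 >= $2; relies on GNU sort -V for dotted-version ordering
        [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
    }
    # e.g. gate on the OpenSSL version the way fips.sh does below:
    version_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL is 3.0.0 or newer"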
00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:39.121 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:39.121 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:39.122 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:15:39.122 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:15:39.380 Error setting digest 00:15:39.380 40E26D22C77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:39.380 40E26D22C77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:39.380 
13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:39.380 Cannot find device "nvmf_init_br" 00:15:39.380 13:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:39.380 Cannot find device "nvmf_init_br2" 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:39.380 Cannot find device "nvmf_tgt_br" 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.380 Cannot find device "nvmf_tgt_br2" 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:39.380 Cannot find device "nvmf_init_br" 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:39.380 Cannot find device "nvmf_init_br2" 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:39.380 Cannot find device "nvmf_tgt_br" 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:39.380 Cannot find device "nvmf_tgt_br2" 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:39.380 Cannot find device "nvmf_br" 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:39.380 Cannot find device "nvmf_init_if" 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:39.380 Cannot find device "nvmf_init_if2" 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.380 13:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:39.380 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:39.639 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:39.639 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:15:39.639 00:15:39.639 --- 10.0.0.3 ping statistics --- 00:15:39.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.639 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:39.639 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:39.639 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:15:39.639 00:15:39.639 --- 10.0.0.4 ping statistics --- 00:15:39.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.639 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:39.639 00:15:39.639 --- 10.0.0.1 ping statistics --- 00:15:39.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.639 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:39.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:39.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:15:39.639 00:15:39.639 --- 10.0.0.2 ping statistics --- 00:15:39.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.639 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:15:39.639 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=73216 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 73216 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73216 ']' 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.640 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:39.640 [2024-11-20 13:34:51.593407] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
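A condensed sketch of the topology that nvmf_veth_init assembled and ping-verified above, reduced to the first initiator/target pair (interface names and addresses as printed in the trace; the full helper in test/nvmf/common.sh also creates the second pair, the iptables ACCEPT rules for port 4420, and brings up lo inside the namespace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3    # root namespace -> target namespace, as in the trace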
00:15:39.640 [2024-11-20 13:34:51.593725] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.898 [2024-11-20 13:34:51.748864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.898 [2024-11-20 13:34:51.819398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.898 [2024-11-20 13:34:51.819456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.898 [2024-11-20 13:34:51.819471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.898 [2024-11-20 13:34:51.819482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.898 [2024-11-20 13:34:51.819492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.898 [2024-11-20 13:34:51.819968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.155 [2024-11-20 13:34:51.880892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.rI3 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.rI3 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.rI3 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.rI3 00:15:40.722 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.287 [2024-11-20 13:34:52.956235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.287 [2024-11-20 13:34:52.972130] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:41.287 [2024-11-20 13:34:52.972394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:41.287 malloc0 00:15:41.287 13:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:41.288 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73256 00:15:41.288 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:41.288 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73256 /var/tmp/bdevperf.sock 00:15:41.288 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73256 ']' 00:15:41.288 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:41.288 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.288 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:41.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:41.288 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.288 13:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:41.288 [2024-11-20 13:34:53.125004] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:41.288 [2024-11-20 13:34:53.125098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73256 ] 00:15:41.545 [2024-11-20 13:34:53.280445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.545 [2024-11-20 13:34:53.348659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.545 [2024-11-20 13:34:53.410145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:42.510 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.510 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:42.510 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.rI3 00:15:42.510 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:43.077 [2024-11-20 13:34:54.755182] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:43.077 TLSTESTn1 00:15:43.077 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:43.077 Running I/O for 10 seconds... 
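The TLS attach traced above, reduced to the two RPCs as they could be issued by hand against bdevperf's RPC socket (names and paths exactly as they appear in the trace; the PSK file /tmp/spdk-psk.rI3 was written and chmod 0600'd earlier in this test):

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.rI3
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests then drives the 10-second, queue-depth-128, 4096-byte verify workload whose per-second throughput and latency summary follow below.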
00:15:45.456 3698.00 IOPS, 14.45 MiB/s [2024-11-20T13:34:58.348Z] 3712.00 IOPS, 14.50 MiB/s [2024-11-20T13:34:59.284Z] 3679.33 IOPS, 14.37 MiB/s [2024-11-20T13:35:00.220Z] 3779.00 IOPS, 14.76 MiB/s [2024-11-20T13:35:01.155Z] 3800.40 IOPS, 14.85 MiB/s [2024-11-20T13:35:02.091Z] 3771.83 IOPS, 14.73 MiB/s [2024-11-20T13:35:03.094Z] 3807.86 IOPS, 14.87 MiB/s [2024-11-20T13:35:04.029Z] 3825.50 IOPS, 14.94 MiB/s [2024-11-20T13:35:05.406Z] 3833.11 IOPS, 14.97 MiB/s [2024-11-20T13:35:05.406Z] 3843.00 IOPS, 15.01 MiB/s 00:15:53.449 Latency(us) 00:15:53.449 [2024-11-20T13:35:05.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.449 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:53.449 Verification LBA range: start 0x0 length 0x2000 00:15:53.449 TLSTESTn1 : 10.02 3848.82 15.03 0.00 0.00 33193.53 6196.13 34555.35 00:15:53.449 [2024-11-20T13:35:05.406Z] =================================================================================================================== 00:15:53.449 [2024-11-20T13:35:05.406Z] Total : 3848.82 15.03 0.00 0.00 33193.53 6196.13 34555.35 00:15:53.449 { 00:15:53.449 "results": [ 00:15:53.449 { 00:15:53.449 "job": "TLSTESTn1", 00:15:53.449 "core_mask": "0x4", 00:15:53.449 "workload": "verify", 00:15:53.449 "status": "finished", 00:15:53.449 "verify_range": { 00:15:53.449 "start": 0, 00:15:53.449 "length": 8192 00:15:53.449 }, 00:15:53.449 "queue_depth": 128, 00:15:53.449 "io_size": 4096, 00:15:53.449 "runtime": 10.017345, 00:15:53.449 "iops": 3848.824214400123, 00:15:53.449 "mibps": 15.03446958750048, 00:15:53.449 "io_failed": 0, 00:15:53.449 "io_timeout": 0, 00:15:53.449 "avg_latency_us": 33193.525052522375, 00:15:53.449 "min_latency_us": 6196.130909090909, 00:15:53.449 "max_latency_us": 34555.34545454545 00:15:53.449 } 00:15:53.449 ], 00:15:53.449 "core_count": 1 00:15:53.449 } 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:53.449 nvmf_trace.0 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73256 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73256 ']' 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
73256 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73256 00:15:53.449 killing process with pid 73256 00:15:53.449 Received shutdown signal, test time was about 10.000000 seconds 00:15:53.449 00:15:53.449 Latency(us) 00:15:53.449 [2024-11-20T13:35:05.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.449 [2024-11-20T13:35:05.406Z] =================================================================================================================== 00:15:53.449 [2024-11-20T13:35:05.406Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73256' 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73256 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73256 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:53.449 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:53.708 rmmod nvme_tcp 00:15:53.708 rmmod nvme_fabrics 00:15:53.708 rmmod nvme_keyring 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 73216 ']' 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 73216 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73216 ']' 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 73216 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73216 00:15:53.708 killing process with pid 73216 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73216' 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73216 00:15:53.708 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73216 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:53.967 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.225 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.225 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:54.225 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.225 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.225 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.225 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:54.225 13:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.rI3 00:15:54.225 ************************************ 00:15:54.225 END TEST nvmf_fips 00:15:54.225 ************************************ 00:15:54.225 00:15:54.225 real 0m15.332s 00:15:54.225 user 0m21.714s 00:15:54.225 sys 0m5.739s 00:15:54.225 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.225 13:35:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:54.225 13:35:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:54.225 13:35:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.225 13:35:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.225 13:35:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.225 ************************************ 00:15:54.225 START TEST nvmf_control_msg_list 00:15:54.225 ************************************ 00:15:54.225 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:54.225 * Looking for test storage... 00:15:54.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:54.225 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:54.225 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:15:54.225 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:54.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.485 --rc genhtml_branch_coverage=1 00:15:54.485 --rc genhtml_function_coverage=1 00:15:54.485 --rc genhtml_legend=1 00:15:54.485 --rc geninfo_all_blocks=1 00:15:54.485 --rc geninfo_unexecuted_blocks=1 00:15:54.485 00:15:54.485 ' 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:54.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.485 --rc genhtml_branch_coverage=1 00:15:54.485 --rc genhtml_function_coverage=1 00:15:54.485 --rc genhtml_legend=1 00:15:54.485 --rc geninfo_all_blocks=1 00:15:54.485 --rc geninfo_unexecuted_blocks=1 00:15:54.485 00:15:54.485 ' 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:54.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.485 --rc genhtml_branch_coverage=1 00:15:54.485 --rc genhtml_function_coverage=1 00:15:54.485 --rc genhtml_legend=1 00:15:54.485 --rc geninfo_all_blocks=1 00:15:54.485 --rc geninfo_unexecuted_blocks=1 00:15:54.485 00:15:54.485 ' 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:54.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.485 --rc genhtml_branch_coverage=1 00:15:54.485 --rc genhtml_function_coverage=1 00:15:54.485 --rc genhtml_legend=1 00:15:54.485 --rc geninfo_all_blocks=1 00:15:54.485 --rc 
geninfo_unexecuted_blocks=1 00:15:54.485 00:15:54.485 ' 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.485 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.485 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:54.486 Cannot find device "nvmf_init_br" 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:54.486 Cannot find device "nvmf_init_br2" 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:54.486 Cannot find device "nvmf_tgt_br" 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.486 Cannot find device "nvmf_tgt_br2" 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:54.486 Cannot find device "nvmf_init_br" 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:54.486 Cannot find device "nvmf_init_br2" 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:54.486 Cannot find device "nvmf_tgt_br" 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:54.486 Cannot find device "nvmf_tgt_br2" 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:54.486 Cannot find device "nvmf_br" 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:54.486 Cannot find 
device "nvmf_init_if" 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:54.486 Cannot find device "nvmf_init_if2" 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:54.486 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:54.745 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:54.746 13:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:54.746 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:54.746 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.137 ms 00:15:54.746 00:15:54.746 --- 10.0.0.3 ping statistics --- 00:15:54.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.746 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:54.746 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:54.746 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:15:54.746 00:15:54.746 --- 10.0.0.4 ping statistics --- 00:15:54.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.746 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:54.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:54.746 00:15:54.746 --- 10.0.0.1 ping statistics --- 00:15:54.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.746 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:54.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:15:54.746 00:15:54.746 --- 10.0.0.2 ping statistics --- 00:15:54.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.746 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73652 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73652 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73652 ']' 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
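
The nvmf_veth_init and nvmfappstart traces above reduce to a short, reproducible bring-up. A minimal sketch, assuming root privileges, the interface names and 10.0.0.0/24 addressing used by this harness, and an SPDK build under $SPDK_DIR (the real helper creates two initiator and two target veth pairs plus firewall rules for both; one pair of each is enough to reach the target at 10.0.0.3:4420):

    # create the target network namespace and one veth pair per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bridge the host-side peers together so initiator and target can talk
    ip link add nvmf_br type bridge
    for dev in nvmf_br nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in and verify reachability, then start the target in the namespace
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &

The control_msg_list test below then configures that target through rpc_cmd (scripts/rpc.py against /var/tmp/spdk.sock) with a TCP transport limited to --in-capsule-data-size 768 and --control-msg-num 1, and runs three queue-depth-1 spdk_nvme_perf clients against 10.0.0.3:4420, presumably to put pressure on the shared control-message list.
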
00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.746 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:55.005 [2024-11-20 13:35:06.729292] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:55.005 [2024-11-20 13:35:06.729417] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.005 [2024-11-20 13:35:06.885825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.005 [2024-11-20 13:35:06.956029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.005 [2024-11-20 13:35:06.956110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.005 [2024-11-20 13:35:06.956138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.005 [2024-11-20 13:35:06.956149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.005 [2024-11-20 13:35:06.956158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:55.005 [2024-11-20 13:35:06.956664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.264 [2024-11-20 13:35:07.018323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.199 [2024-11-20 13:35:07.862877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.199 Malloc0 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.199 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.200 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.200 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:56.200 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.200 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.200 [2024-11-20 13:35:07.902572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:56.200 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.200 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73684 00:15:56.200 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:56.200 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73685 00:15:56.200 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:56.200 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73686 00:15:56.200 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:56.200 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73684 00:15:56.200 [2024-11-20 13:35:08.090915] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:56.200 [2024-11-20 13:35:08.101318] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:56.200 [2024-11-20 13:35:08.101685] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:57.574 Initializing NVMe Controllers 00:15:57.574 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:57.574 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:57.574 Initialization complete. Launching workers. 00:15:57.574 ======================================================== 00:15:57.574 Latency(us) 00:15:57.574 Device Information : IOPS MiB/s Average min max 00:15:57.574 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3159.00 12.34 316.22 143.94 643.77 00:15:57.574 ======================================================== 00:15:57.574 Total : 3159.00 12.34 316.22 143.94 643.77 00:15:57.574 00:15:57.574 Initializing NVMe Controllers 00:15:57.574 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:57.574 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:57.574 Initialization complete. Launching workers. 00:15:57.574 ======================================================== 00:15:57.574 Latency(us) 00:15:57.574 Device Information : IOPS MiB/s Average min max 00:15:57.574 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3161.00 12.35 315.98 160.41 905.80 00:15:57.574 ======================================================== 00:15:57.574 Total : 3161.00 12.35 315.98 160.41 905.80 00:15:57.574 00:15:57.574 Initializing NVMe Controllers 00:15:57.574 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:57.574 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:57.574 Initialization complete. Launching workers. 
00:15:57.574 ======================================================== 00:15:57.574 Latency(us) 00:15:57.574 Device Information : IOPS MiB/s Average min max 00:15:57.574 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3171.00 12.39 315.01 196.98 485.00 00:15:57.574 ======================================================== 00:15:57.574 Total : 3171.00 12.39 315.01 196.98 485.00 00:15:57.574 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73685 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73686 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:57.574 rmmod nvme_tcp 00:15:57.574 rmmod nvme_fabrics 00:15:57.574 rmmod nvme_keyring 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73652 ']' 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73652 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73652 ']' 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73652 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73652 00:15:57.574 killing process with pid 73652 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73652' 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73652 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73652 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:57.574 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:57.832 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:57.832 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.832 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:57.833 ************************************ 00:15:57.833 END TEST 
nvmf_control_msg_list 00:15:57.833 ************************************ 00:15:57.833 00:15:57.833 real 0m3.702s 00:15:57.833 user 0m5.926s 00:15:57.833 sys 0m1.347s 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.833 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:58.092 ************************************ 00:15:58.092 START TEST nvmf_wait_for_buf 00:15:58.092 ************************************ 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:58.092 * Looking for test storage... 00:15:58.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:58.092 13:35:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:58.092 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:58.092 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.093 --rc genhtml_branch_coverage=1 00:15:58.093 --rc genhtml_function_coverage=1 00:15:58.093 --rc genhtml_legend=1 00:15:58.093 --rc geninfo_all_blocks=1 00:15:58.093 --rc geninfo_unexecuted_blocks=1 00:15:58.093 00:15:58.093 ' 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.093 --rc genhtml_branch_coverage=1 00:15:58.093 --rc genhtml_function_coverage=1 00:15:58.093 --rc genhtml_legend=1 00:15:58.093 --rc geninfo_all_blocks=1 00:15:58.093 --rc geninfo_unexecuted_blocks=1 00:15:58.093 00:15:58.093 ' 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.093 --rc genhtml_branch_coverage=1 00:15:58.093 --rc genhtml_function_coverage=1 00:15:58.093 --rc genhtml_legend=1 00:15:58.093 --rc geninfo_all_blocks=1 00:15:58.093 --rc geninfo_unexecuted_blocks=1 00:15:58.093 00:15:58.093 ' 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.093 --rc genhtml_branch_coverage=1 00:15:58.093 --rc genhtml_function_coverage=1 00:15:58.093 --rc genhtml_legend=1 00:15:58.093 --rc geninfo_all_blocks=1 00:15:58.093 --rc geninfo_unexecuted_blocks=1 00:15:58.093 00:15:58.093 ' 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:58.093 13:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:58.093 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:58.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:58.094 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:58.352 Cannot find device "nvmf_init_br" 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:58.352 Cannot find device "nvmf_init_br2" 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:58.352 Cannot find device "nvmf_tgt_br" 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.352 Cannot find device "nvmf_tgt_br2" 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:58.352 Cannot find device "nvmf_init_br" 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:58.352 Cannot find device "nvmf_init_br2" 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:58.352 Cannot find device "nvmf_tgt_br" 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:58.352 Cannot find device "nvmf_tgt_br2" 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:58.352 Cannot find device "nvmf_br" 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:58.352 Cannot find device "nvmf_init_if" 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:58.352 Cannot find device "nvmf_init_if2" 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.352 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:58.352 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:58.353 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:58.353 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:58.353 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:58.353 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:58.353 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:58.353 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:58.353 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:58.353 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:58.353 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:58.353 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:58.353 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:58.611 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.611 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:58.611 00:15:58.611 --- 10.0.0.3 ping statistics --- 00:15:58.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.611 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:58.611 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:58.611 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:58.612 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:15:58.612 00:15:58.612 --- 10.0.0.4 ping statistics --- 00:15:58.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.612 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:58.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:15:58.612 00:15:58.612 --- 10.0.0.1 ping statistics --- 00:15:58.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.612 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:58.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:58.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:15:58.612 00:15:58.612 --- 10.0.0.2 ping statistics --- 00:15:58.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.612 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73924 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73924 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73924 ']' 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.612 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:58.612 [2024-11-20 13:35:10.517468] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:15:58.612 [2024-11-20 13:35:10.517610] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.872 [2024-11-20 13:35:10.673919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.872 [2024-11-20 13:35:10.743851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.872 [2024-11-20 13:35:10.743942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.872 [2024-11-20 13:35:10.743968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.872 [2024-11-20 13:35:10.743978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.872 [2024-11-20 13:35:10.743988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.872 [2024-11-20 13:35:10.744534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.872 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.872 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:15:58.872 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:58.872 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:58.872 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.133 13:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.133 [2024-11-20 13:35:10.893646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.133 Malloc0 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.133 [2024-11-20 13:35:10.969380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.133 [2024-11-20 13:35:10.993530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.133 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:59.392 [2024-11-20 13:35:11.206536] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:00.768 Initializing NVMe Controllers 00:16:00.768 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:00.768 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:16:00.768 Initialization complete. Launching workers. 00:16:00.768 ======================================================== 00:16:00.768 Latency(us) 00:16:00.768 Device Information : IOPS MiB/s Average min max 00:16:00.768 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.95 62.49 8000.96 7764.44 8228.20 00:16:00.768 ======================================================== 00:16:00.768 Total : 499.95 62.49 8000.96 7764.44 8228.20 00:16:00.768 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:00.768 rmmod nvme_tcp 00:16:00.768 rmmod nvme_fabrics 00:16:00.768 rmmod nvme_keyring 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73924 ']' 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73924 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73924 ']' 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # 
kill -0 73924 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73924 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.768 killing process with pid 73924 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73924' 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73924 00:16:00.768 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73924 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:01.027 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:16:01.286 00:16:01.286 real 0m3.345s 00:16:01.286 user 0m2.657s 00:16:01.286 sys 0m0.822s 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:01.286 ************************************ 00:16:01.286 END TEST nvmf_wait_for_buf 00:16:01.286 ************************************ 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:01.286 ************************************ 00:16:01.286 START TEST nvmf_nsid 00:16:01.286 ************************************ 00:16:01.286 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:01.545 * Looking for test storage... 
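Note: the nvmf_wait_for_buf test that finishes above deliberately starves the iobuf small pool: it caps the pool at 154 buffers of 8 KiB (iobuf_set_options), creates the TCP transport with small queue and buffer counts (-u 8192 -n 24 -b 24), then drives 128 KiB random reads at queue depth 4 with spdk_nvme_perf so requests have to wait for buffers. The pass condition is simply that the nvmf_TCP small-pool retry counter is non-zero (4750 in this run). A condensed sketch of that final check, assuming the plain scripts/rpc.py client in place of the rpc_cmd wrapper used by the trace:
retry_count=$(scripts/rpc.py iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')   # same jq filter as the trace
if [[ "$retry_count" -eq 0 ]]; then
    echo "FAIL: no buffer-wait retries recorded; I/O never had to wait for iobufs" >&2
    exit 1
fi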
00:16:01.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:16:01.545 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:01.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.546 --rc genhtml_branch_coverage=1 00:16:01.546 --rc genhtml_function_coverage=1 00:16:01.546 --rc genhtml_legend=1 00:16:01.546 --rc geninfo_all_blocks=1 00:16:01.546 --rc geninfo_unexecuted_blocks=1 00:16:01.546 00:16:01.546 ' 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:01.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.546 --rc genhtml_branch_coverage=1 00:16:01.546 --rc genhtml_function_coverage=1 00:16:01.546 --rc genhtml_legend=1 00:16:01.546 --rc geninfo_all_blocks=1 00:16:01.546 --rc geninfo_unexecuted_blocks=1 00:16:01.546 00:16:01.546 ' 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:01.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.546 --rc genhtml_branch_coverage=1 00:16:01.546 --rc genhtml_function_coverage=1 00:16:01.546 --rc genhtml_legend=1 00:16:01.546 --rc geninfo_all_blocks=1 00:16:01.546 --rc geninfo_unexecuted_blocks=1 00:16:01.546 00:16:01.546 ' 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:01.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.546 --rc genhtml_branch_coverage=1 00:16:01.546 --rc genhtml_function_coverage=1 00:16:01.546 --rc genhtml_legend=1 00:16:01.546 --rc geninfo_all_blocks=1 00:16:01.546 --rc geninfo_unexecuted_blocks=1 00:16:01.546 00:16:01.546 ' 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
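Note: the "Looking for test storage" preamble above is the coverage-tool version gate from autotest_common.sh. It takes the last field of lcov --version and feeds it through the lt/cmp_versions helpers in scripts/common.sh, which split each version string on '.', '-' and ':' and compare the fields numerically before choosing which LCOV_OPTS to export. A simplified, self-contained rendering of that comparison (not the verbatim helper):
version_lt() {                         # succeeds when $1 sorts before $2
    local IFS='.-:'
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                           # equal versions are not less-than
}
version_lt 1.15 2 && echo "1.15 sorts before 2"   # mirrors the 'lt 1.15 2' call in the trace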
00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:01.546 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:01.546 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:01.547 Cannot find device "nvmf_init_br" 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:01.547 Cannot find device "nvmf_init_br2" 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:01.547 Cannot find device "nvmf_tgt_br" 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.547 Cannot find device "nvmf_tgt_br2" 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:01.547 Cannot find device "nvmf_init_br" 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:01.547 Cannot find device "nvmf_init_br2" 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:16:01.547 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:01.805 Cannot find device "nvmf_tgt_br" 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:01.805 Cannot find device "nvmf_tgt_br2" 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:01.805 Cannot find device "nvmf_br" 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:01.805 Cannot find device "nvmf_init_if" 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:01.805 Cannot find device "nvmf_init_if2" 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:16:01.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:01.805 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
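Note: every firewall rule the test inserts below goes through the ipts wrapper, which tags the rule with an '-m comment --comment SPDK_NVMF:<rule>' marker; teardown (the iptr call visible near the end of the previous test) then removes only the tagged rules by filtering them out of iptables-save and feeding the result back to iptables-restore. A paraphrased sketch of that convention (function bodies are simplified, not copied from nvmf/common.sh):
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }        # insert a rule, tagged
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }     # strip only the tagged rules
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP on port 4420
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                      # let bridged traffic pass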
00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:02.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:02.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:16:02.064 00:16:02.064 --- 10.0.0.3 ping statistics --- 00:16:02.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.064 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:02.064 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:02.064 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:16:02.064 00:16:02.064 --- 10.0.0.4 ping statistics --- 00:16:02.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.064 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:02.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:02.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:16:02.064 00:16:02.064 --- 10.0.0.1 ping statistics --- 00:16:02.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.064 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:02.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:02.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:16:02.064 00:16:02.064 --- 10.0.0.2 ping statistics --- 00:16:02.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.064 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=74195 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 74195 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74195 ']' 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.064 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:02.064 [2024-11-20 13:35:13.902747] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:16:02.064 [2024-11-20 13:35:13.902861] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.323 [2024-11-20 13:35:14.057755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.323 [2024-11-20 13:35:14.131721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.324 [2024-11-20 13:35:14.131807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.324 [2024-11-20 13:35:14.131821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.324 [2024-11-20 13:35:14.131831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.324 [2024-11-20 13:35:14.131841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.324 [2024-11-20 13:35:14.132389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.324 [2024-11-20 13:35:14.196375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:02.324 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.324 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:02.324 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:02.324 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:02.324 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=74215 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=597e0386-eb9a-43b6-94f5-68c8b9beb5fa 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=bd5a058e-3fb9-4a25-9ea6-902fb51b3c5e 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=104bc6d3-db57-47d3-9675-3f09fa0bf60e 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:02.583 null0 00:16:02.583 null1 00:16:02.583 null2 00:16:02.583 [2024-11-20 13:35:14.375792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.583 [2024-11-20 13:35:14.394808] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:16:02.583 [2024-11-20 13:35:14.394918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74215 ] 00:16:02.583 [2024-11-20 13:35:14.399938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:02.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 74215 /var/tmp/tgt2.sock 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74215 ']' 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
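At this point the nsid test has two SPDK targets up (the nvmf_tgt started above, listening on 10.0.0.3 port 4420, plus a second spdk_tgt driven through /var/tmp/tgt2.sock), three null bdevs, and three freshly generated namespace UUIDs. The verification traced below amounts to the following sketch; the individual commands are the ones that appear in the trace, while the variable names and the explicit comparison loop here are illustrative.

  # connect to the listener on 10.0.0.1:4421 and wait for the namespaces to show up
  nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  until lsblk -l -o NAME | grep -q -w nvme0n1; do sleep 1; done
  # the NGUID the kernel reports must equal the generated UUID with its dashes stripped
  expected=$(echo "$ns1uuid" | tr -d - | tr '[:lower:]' '[:upper:]')
  actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
  [[ "$actual" == "$expected" ]]
  # ...repeated for nvme0n2/ns2uuid and nvme0n3/ns3uuid, then the session is torn down
  nvme disconnect -d /dev/nvme0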
00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.583 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:02.842 [2024-11-20 13:35:14.549487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.842 [2024-11-20 13:35:14.626692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.842 [2024-11-20 13:35:14.708595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:03.100 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.100 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:03.100 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:16:03.668 [2024-11-20 13:35:15.364144] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.668 [2024-11-20 13:35:15.380339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:16:03.668 nvme0n1 nvme0n2 00:16:03.668 nvme1n1 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid=8ff08136-65da-4f4c-b769-a07096c587b5 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:16:03.668 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:05.043 13:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 597e0386-eb9a-43b6-94f5-68c8b9beb5fa 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=597e0386eb9a43b694f568c8b9beb5fa 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 597E0386EB9A43B694F568C8B9BEB5FA 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 597E0386EB9A43B694F568C8B9BEB5FA == \5\9\7\E\0\3\8\6\E\B\9\A\4\3\B\6\9\4\F\5\6\8\C\8\B\9\B\E\B\5\F\A ]] 00:16:05.043 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid bd5a058e-3fb9-4a25-9ea6-902fb51b3c5e 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=bd5a058e3fb94a259ea6902fb51b3c5e 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BD5A058E3FB94A259EA6902FB51B3C5E 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ BD5A058E3FB94A259EA6902FB51B3C5E == \B\D\5\A\0\5\8\E\3\F\B\9\4\A\2\5\9\E\A\6\9\0\2\F\B\5\1\B\3\C\5\E ]] 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:05.044 13:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 104bc6d3-db57-47d3-9675-3f09fa0bf60e 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=104bc6d3db5747d396753f09fa0bf60e 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 104BC6D3DB5747D396753F09FA0BF60E 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 104BC6D3DB5747D396753F09FA0BF60E == \1\0\4\B\C\6\D\3\D\B\5\7\4\7\D\3\9\6\7\5\3\F\0\9\F\A\0\B\F\6\0\E ]] 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 74215 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74215 ']' 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74215 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.044 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74215 00:16:05.303 killing process with pid 74215 00:16:05.303 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:05.303 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:05.303 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74215' 00:16:05.303 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74215 00:16:05.303 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74215 00:16:05.562 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:16:05.562 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:05.562 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:16:05.562 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:16:05.562 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:16:05.562 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:05.562 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:05.562 rmmod nvme_tcp 00:16:05.562 rmmod nvme_fabrics 00:16:05.821 rmmod nvme_keyring 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 74195 ']' 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 74195 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74195 ']' 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74195 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74195 00:16:05.821 killing process with pid 74195 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74195' 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74195 00:16:05.821 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74195 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:06.080 13:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:06.080 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:06.080 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:06.080 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.080 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.080 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.339 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:16:06.339 00:16:06.339 real 0m4.845s 00:16:06.339 user 0m7.158s 00:16:06.339 sys 0m1.834s 00:16:06.339 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.339 13:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:06.339 ************************************ 00:16:06.339 END TEST nvmf_nsid 00:16:06.339 ************************************ 00:16:06.339 13:35:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:06.339 ************************************ 00:16:06.339 END TEST nvmf_target_extra 00:16:06.339 ************************************ 00:16:06.339 00:16:06.339 real 5m25.115s 00:16:06.339 user 11m29.684s 00:16:06.339 sys 1m10.458s 00:16:06.339 13:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.339 13:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:06.339 13:35:18 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:06.339 13:35:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:06.339 13:35:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.339 13:35:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:06.339 ************************************ 00:16:06.339 START TEST nvmf_host 00:16:06.339 ************************************ 00:16:06.339 13:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:06.339 * Looking for test storage... 
00:16:06.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:06.339 13:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:06.339 13:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:06.339 13:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:06.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.599 --rc genhtml_branch_coverage=1 00:16:06.599 --rc genhtml_function_coverage=1 00:16:06.599 --rc genhtml_legend=1 00:16:06.599 --rc geninfo_all_blocks=1 00:16:06.599 --rc geninfo_unexecuted_blocks=1 00:16:06.599 00:16:06.599 ' 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:06.599 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:06.599 --rc genhtml_branch_coverage=1 00:16:06.599 --rc genhtml_function_coverage=1 00:16:06.599 --rc genhtml_legend=1 00:16:06.599 --rc geninfo_all_blocks=1 00:16:06.599 --rc geninfo_unexecuted_blocks=1 00:16:06.599 00:16:06.599 ' 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:06.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.599 --rc genhtml_branch_coverage=1 00:16:06.599 --rc genhtml_function_coverage=1 00:16:06.599 --rc genhtml_legend=1 00:16:06.599 --rc geninfo_all_blocks=1 00:16:06.599 --rc geninfo_unexecuted_blocks=1 00:16:06.599 00:16:06.599 ' 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:06.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.599 --rc genhtml_branch_coverage=1 00:16:06.599 --rc genhtml_function_coverage=1 00:16:06.599 --rc genhtml_legend=1 00:16:06.599 --rc geninfo_all_blocks=1 00:16:06.599 --rc geninfo_unexecuted_blocks=1 00:16:06.599 00:16:06.599 ' 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.599 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:06.599 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:06.600 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:06.600 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:06.600 13:35:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:06.600 13:35:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:06.600 13:35:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:06.600 13:35:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:16:06.600 13:35:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:06.600 
13:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:06.600 13:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.600 13:35:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.600 ************************************ 00:16:06.600 START TEST nvmf_identify 00:16:06.600 ************************************ 00:16:06.600 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:06.600 * Looking for test storage... 00:16:06.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:06.600 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:06.600 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:16:06.600 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:06.859 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:06.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.860 --rc genhtml_branch_coverage=1 00:16:06.860 --rc genhtml_function_coverage=1 00:16:06.860 --rc genhtml_legend=1 00:16:06.860 --rc geninfo_all_blocks=1 00:16:06.860 --rc geninfo_unexecuted_blocks=1 00:16:06.860 00:16:06.860 ' 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:06.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.860 --rc genhtml_branch_coverage=1 00:16:06.860 --rc genhtml_function_coverage=1 00:16:06.860 --rc genhtml_legend=1 00:16:06.860 --rc geninfo_all_blocks=1 00:16:06.860 --rc geninfo_unexecuted_blocks=1 00:16:06.860 00:16:06.860 ' 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:06.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.860 --rc genhtml_branch_coverage=1 00:16:06.860 --rc genhtml_function_coverage=1 00:16:06.860 --rc genhtml_legend=1 00:16:06.860 --rc geninfo_all_blocks=1 00:16:06.860 --rc geninfo_unexecuted_blocks=1 00:16:06.860 00:16:06.860 ' 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:06.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.860 --rc genhtml_branch_coverage=1 00:16:06.860 --rc genhtml_function_coverage=1 00:16:06.860 --rc genhtml_legend=1 00:16:06.860 --rc geninfo_all_blocks=1 00:16:06.860 --rc geninfo_unexecuted_blocks=1 00:16:06.860 00:16:06.860 ' 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.860 
13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:06.860 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.860 13:35:18 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:06.860 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:06.861 Cannot find device "nvmf_init_br" 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:06.861 Cannot find device "nvmf_init_br2" 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:06.861 Cannot find device "nvmf_tgt_br" 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:06.861 Cannot find device "nvmf_tgt_br2" 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:06.861 Cannot find device "nvmf_init_br" 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:06.861 Cannot find device "nvmf_init_br2" 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:06.861 Cannot find device "nvmf_tgt_br" 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:06.861 Cannot find device "nvmf_tgt_br2" 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:06.861 Cannot find device "nvmf_br" 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:06.861 Cannot find device "nvmf_init_if" 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:06.861 Cannot find device "nvmf_init_if2" 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:06.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:06.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:06.861 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:07.120 
13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:07.120 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:07.120 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:16:07.120 00:16:07.120 --- 10.0.0.3 ping statistics --- 00:16:07.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.120 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:07.120 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:07.120 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:16:07.120 00:16:07.120 --- 10.0.0.4 ping statistics --- 00:16:07.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.120 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:07.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:07.120 00:16:07.120 --- 10.0.0.1 ping statistics --- 00:16:07.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.120 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:07.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:16:07.120 00:16:07.120 --- 10.0.0.2 ping statistics --- 00:16:07.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.120 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.120 13:35:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74576 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74576 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74576 ']' 00:16:07.120 
13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.120 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.121 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.121 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:07.379 [2024-11-20 13:35:19.098642] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:16:07.379 [2024-11-20 13:35:19.098759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.379 [2024-11-20 13:35:19.250001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.379 [2024-11-20 13:35:19.319556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.379 [2024-11-20 13:35:19.319658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.380 [2024-11-20 13:35:19.319686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.380 [2024-11-20 13:35:19.319695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.380 [2024-11-20 13:35:19.319702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
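At this point the nvmf/common.sh setup traced above has finished building the test network: two initiator-side veth pairs (nvmf_init_if/nvmf_init_br and nvmf_init_if2/nvmf_init_br2, addressed 10.0.0.1 and 10.0.0.2) stay in the default namespace, two target-side pairs (nvmf_tgt_if/nvmf_tgt_br and nvmf_tgt_if2/nvmf_tgt_br2, addressed 10.0.0.3 and 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, all four *_br ends are enslaved to the nvmf_br bridge, and iptables ACCEPT rules open TCP port 4420; the four pings at 13:35:18 confirm reachability in both directions, after which nvmf_tgt is launched inside the namespace (the EAL and tracepoint notices above). A minimal sketch of the same fixture, distilled from the commands logged above (one veth pair per side for brevity, run as root):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3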
00:16:07.380 [2024-11-20 13:35:19.320846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.380 [2024-11-20 13:35:19.321028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.380 [2024-11-20 13:35:19.321150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.380 [2024-11-20 13:35:19.321151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.638 [2024-11-20 13:35:19.375458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:07.638 [2024-11-20 13:35:19.455271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:07.638 Malloc0 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:07.638 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.639 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:07.639 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.639 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:07.639 [2024-11-20 13:35:19.567650] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:07.639 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.639 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:07.639 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.639 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:07.639 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.639 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:07.639 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.639 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:07.639 [ 00:16:07.639 { 00:16:07.639 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:07.639 "subtype": "Discovery", 00:16:07.639 "listen_addresses": [ 00:16:07.639 { 00:16:07.639 "trtype": "TCP", 00:16:07.639 "adrfam": "IPv4", 00:16:07.639 "traddr": "10.0.0.3", 00:16:07.639 "trsvcid": "4420" 00:16:07.639 } 00:16:07.639 ], 00:16:07.639 "allow_any_host": true, 00:16:07.639 "hosts": [] 00:16:07.639 }, 00:16:07.639 { 00:16:07.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.900 "subtype": "NVMe", 00:16:07.900 "listen_addresses": [ 00:16:07.900 { 00:16:07.900 "trtype": "TCP", 00:16:07.900 "adrfam": "IPv4", 00:16:07.900 "traddr": "10.0.0.3", 00:16:07.900 "trsvcid": "4420" 00:16:07.900 } 00:16:07.900 ], 00:16:07.900 "allow_any_host": true, 00:16:07.900 "hosts": [], 00:16:07.900 "serial_number": "SPDK00000000000001", 00:16:07.900 "model_number": "SPDK bdev Controller", 00:16:07.900 "max_namespaces": 32, 00:16:07.900 "min_cntlid": 1, 00:16:07.900 "max_cntlid": 65519, 00:16:07.900 "namespaces": [ 00:16:07.900 { 00:16:07.900 "nsid": 1, 00:16:07.900 "bdev_name": "Malloc0", 00:16:07.900 "name": "Malloc0", 00:16:07.900 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:07.900 "eui64": "ABCDEF0123456789", 00:16:07.900 "uuid": "7034a5cb-e3f9-48db-ba93-cd4c23e47f95" 00:16:07.900 } 00:16:07.900 ] 00:16:07.900 } 00:16:07.900 ] 00:16:07.900 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.900 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:07.900 [2024-11-20 13:35:19.625825] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
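The subsystem dump above is the result of a short RPC-driven configuration: rpc_cmd creates the TCP transport (nvmf_create_transport -t tcp -o -u 8192), a 64 MB Malloc0 bdev, the nqn.2016-06.io.spdk:cnode1 subsystem, attaches Malloc0 as namespace 1, and adds 10.0.0.3:4420 listeners for both the subsystem and the discovery service; spdk_nvme_identify is then pointed at the discovery NQN, and its output follows. Issued by hand against a running target, roughly the same sequence with SPDK's rpc.py would look like this (a sketch; it assumes the repo layout used in this run and the default /var/tmp/spdk.sock RPC socket that waitforlisten polls above):
  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems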
00:16:07.900 [2024-11-20 13:35:19.625880] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74598 ] 00:16:07.900 [2024-11-20 13:35:19.786475] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:16:07.900 [2024-11-20 13:35:19.786549] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:07.900 [2024-11-20 13:35:19.786557] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:07.900 [2024-11-20 13:35:19.786574] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:07.900 [2024-11-20 13:35:19.786585] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:07.900 [2024-11-20 13:35:19.786937] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:16:07.900 [2024-11-20 13:35:19.787010] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22ec750 0 00:16:07.900 [2024-11-20 13:35:19.801207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:07.900 [2024-11-20 13:35:19.801231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:07.900 [2024-11-20 13:35:19.801238] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:07.900 [2024-11-20 13:35:19.801242] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:07.900 [2024-11-20 13:35:19.801276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.900 [2024-11-20 13:35:19.801284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.900 [2024-11-20 13:35:19.801288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ec750) 00:16:07.900 [2024-11-20 13:35:19.801304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:07.900 [2024-11-20 13:35:19.801337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350740, cid 0, qid 0 00:16:07.900 [2024-11-20 13:35:19.808267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.900 [2024-11-20 13:35:19.808286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.900 [2024-11-20 13:35:19.808292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.900 [2024-11-20 13:35:19.808298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350740) on tqpair=0x22ec750 00:16:07.900 [2024-11-20 13:35:19.808313] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:07.900 [2024-11-20 13:35:19.808323] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:16:07.900 [2024-11-20 13:35:19.808329] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:16:07.900 [2024-11-20 13:35:19.808346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.900 [2024-11-20 13:35:19.808352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:16:07.900 [2024-11-20 13:35:19.808356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ec750) 00:16:07.900 [2024-11-20 13:35:19.808366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.900 [2024-11-20 13:35:19.808394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350740, cid 0, qid 0 00:16:07.900 [2024-11-20 13:35:19.808469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.900 [2024-11-20 13:35:19.808477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.900 [2024-11-20 13:35:19.808481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.900 [2024-11-20 13:35:19.808486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350740) on tqpair=0x22ec750 00:16:07.900 [2024-11-20 13:35:19.808493] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:16:07.900 [2024-11-20 13:35:19.808501] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:16:07.900 [2024-11-20 13:35:19.808510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.900 [2024-11-20 13:35:19.808515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.900 [2024-11-20 13:35:19.808519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ec750) 00:16:07.900 [2024-11-20 13:35:19.808527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.900 [2024-11-20 13:35:19.808547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350740, cid 0, qid 0 00:16:07.900 [2024-11-20 13:35:19.808591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.901 [2024-11-20 13:35:19.808599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.901 [2024-11-20 13:35:19.808603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.808607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350740) on tqpair=0x22ec750 00:16:07.901 [2024-11-20 13:35:19.808614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:16:07.901 [2024-11-20 13:35:19.808623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:07.901 [2024-11-20 13:35:19.808631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.808635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.808639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ec750) 00:16:07.901 [2024-11-20 13:35:19.808647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.901 [2024-11-20 13:35:19.808665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350740, cid 0, qid 0 00:16:07.901 [2024-11-20 13:35:19.808707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.901 [2024-11-20 13:35:19.808715] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.901 [2024-11-20 13:35:19.808719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.808723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350740) on tqpair=0x22ec750 00:16:07.901 [2024-11-20 13:35:19.808730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:07.901 [2024-11-20 13:35:19.808740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.808745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.808749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ec750) 00:16:07.901 [2024-11-20 13:35:19.808757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.901 [2024-11-20 13:35:19.808775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350740, cid 0, qid 0 00:16:07.901 [2024-11-20 13:35:19.808820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.901 [2024-11-20 13:35:19.808828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.901 [2024-11-20 13:35:19.808832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.808836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350740) on tqpair=0x22ec750 00:16:07.901 [2024-11-20 13:35:19.808841] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:07.901 [2024-11-20 13:35:19.808847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:07.901 [2024-11-20 13:35:19.808855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:07.901 [2024-11-20 13:35:19.808967] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:16:07.901 [2024-11-20 13:35:19.808978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:07.901 [2024-11-20 13:35:19.808989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.808994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.808998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ec750) 00:16:07.901 [2024-11-20 13:35:19.809006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.901 [2024-11-20 13:35:19.809030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350740, cid 0, qid 0 00:16:07.901 [2024-11-20 13:35:19.809078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.901 [2024-11-20 13:35:19.809090] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.901 [2024-11-20 13:35:19.809095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:16:07.901 [2024-11-20 13:35:19.809099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350740) on tqpair=0x22ec750 00:16:07.901 [2024-11-20 13:35:19.809105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:07.901 [2024-11-20 13:35:19.809116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.809121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.809125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ec750) 00:16:07.901 [2024-11-20 13:35:19.809133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.901 [2024-11-20 13:35:19.809152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350740, cid 0, qid 0 00:16:07.901 [2024-11-20 13:35:19.809214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.901 [2024-11-20 13:35:19.809226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.901 [2024-11-20 13:35:19.809230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.809235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350740) on tqpair=0x22ec750 00:16:07.901 [2024-11-20 13:35:19.809240] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:07.901 [2024-11-20 13:35:19.809245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:07.901 [2024-11-20 13:35:19.809254] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:16:07.901 [2024-11-20 13:35:19.809271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:07.901 [2024-11-20 13:35:19.809283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.809287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ec750) 00:16:07.901 [2024-11-20 13:35:19.809296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.901 [2024-11-20 13:35:19.809317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350740, cid 0, qid 0 00:16:07.901 [2024-11-20 13:35:19.809405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:07.901 [2024-11-20 13:35:19.809417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:07.901 [2024-11-20 13:35:19.809422] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.809426] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22ec750): datao=0, datal=4096, cccid=0 00:16:07.901 [2024-11-20 13:35:19.809431] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2350740) on tqpair(0x22ec750): expected_datao=0, payload_size=4096 00:16:07.901 [2024-11-20 13:35:19.809438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.809447] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.809452] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.809462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.901 [2024-11-20 13:35:19.809468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.901 [2024-11-20 13:35:19.809472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.809477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350740) on tqpair=0x22ec750 00:16:07.901 [2024-11-20 13:35:19.809486] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:16:07.901 [2024-11-20 13:35:19.809491] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:16:07.901 [2024-11-20 13:35:19.809496] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:16:07.901 [2024-11-20 13:35:19.809502] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:16:07.901 [2024-11-20 13:35:19.809507] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:16:07.901 [2024-11-20 13:35:19.809512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:16:07.901 [2024-11-20 13:35:19.809527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:07.901 [2024-11-20 13:35:19.809536] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.809541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.901 [2024-11-20 13:35:19.809545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ec750) 00:16:07.901 [2024-11-20 13:35:19.809553] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:07.902 [2024-11-20 13:35:19.809575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350740, cid 0, qid 0 00:16:07.902 [2024-11-20 13:35:19.809628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.902 [2024-11-20 13:35:19.809636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.902 [2024-11-20 13:35:19.809640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.809645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350740) on tqpair=0x22ec750 00:16:07.902 [2024-11-20 13:35:19.809653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.809658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.809662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ec750) 00:16:07.902 [2024-11-20 13:35:19.809669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.902 
[2024-11-20 13:35:19.809676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.809680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.809684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22ec750) 00:16:07.902 [2024-11-20 13:35:19.809690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.902 [2024-11-20 13:35:19.809697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.809701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.809705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22ec750) 00:16:07.902 [2024-11-20 13:35:19.809711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.902 [2024-11-20 13:35:19.809718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.809722] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.809725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ec750) 00:16:07.902 [2024-11-20 13:35:19.809731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.902 [2024-11-20 13:35:19.809737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:07.902 [2024-11-20 13:35:19.809751] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:07.902 [2024-11-20 13:35:19.809760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.809764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22ec750) 00:16:07.902 [2024-11-20 13:35:19.809771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.902 [2024-11-20 13:35:19.809793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350740, cid 0, qid 0 00:16:07.902 [2024-11-20 13:35:19.809800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23508c0, cid 1, qid 0 00:16:07.902 [2024-11-20 13:35:19.809805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350a40, cid 2, qid 0 00:16:07.902 [2024-11-20 13:35:19.809810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350bc0, cid 3, qid 0 00:16:07.902 [2024-11-20 13:35:19.809815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350d40, cid 4, qid 0 00:16:07.902 [2024-11-20 13:35:19.809905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.902 [2024-11-20 13:35:19.809917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.902 [2024-11-20 13:35:19.809922] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.809927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350d40) on tqpair=0x22ec750 00:16:07.902 [2024-11-20 
13:35:19.809933] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:16:07.902 [2024-11-20 13:35:19.809938] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:16:07.902 [2024-11-20 13:35:19.809951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.809956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22ec750) 00:16:07.902 [2024-11-20 13:35:19.809964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.902 [2024-11-20 13:35:19.809984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350d40, cid 4, qid 0 00:16:07.902 [2024-11-20 13:35:19.810040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:07.902 [2024-11-20 13:35:19.810048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:07.902 [2024-11-20 13:35:19.810052] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.810056] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22ec750): datao=0, datal=4096, cccid=4 00:16:07.902 [2024-11-20 13:35:19.810061] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2350d40) on tqpair(0x22ec750): expected_datao=0, payload_size=4096 00:16:07.902 [2024-11-20 13:35:19.810066] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.810073] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.810078] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.810086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.902 [2024-11-20 13:35:19.810093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.902 [2024-11-20 13:35:19.810097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.810102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350d40) on tqpair=0x22ec750 00:16:07.902 [2024-11-20 13:35:19.810116] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:16:07.902 [2024-11-20 13:35:19.810149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.810155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22ec750) 00:16:07.902 [2024-11-20 13:35:19.810163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.902 [2024-11-20 13:35:19.810172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.810176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.810180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22ec750) 00:16:07.902 [2024-11-20 13:35:19.810199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.902 [2024-11-20 13:35:19.810228] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350d40, cid 4, qid 0 00:16:07.902 [2024-11-20 13:35:19.810240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350ec0, cid 5, qid 0 00:16:07.902 [2024-11-20 13:35:19.810354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:07.902 [2024-11-20 13:35:19.810365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:07.902 [2024-11-20 13:35:19.810369] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.810373] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22ec750): datao=0, datal=1024, cccid=4 00:16:07.902 [2024-11-20 13:35:19.810378] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2350d40) on tqpair(0x22ec750): expected_datao=0, payload_size=1024 00:16:07.902 [2024-11-20 13:35:19.810383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.810391] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.810395] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.810401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.902 [2024-11-20 13:35:19.810407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.902 [2024-11-20 13:35:19.810411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.902 [2024-11-20 13:35:19.810416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350ec0) on tqpair=0x22ec750 00:16:07.902 [2024-11-20 13:35:19.810435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.902 [2024-11-20 13:35:19.810443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.902 [2024-11-20 13:35:19.810447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.903 [2024-11-20 13:35:19.810451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350d40) on tqpair=0x22ec750 00:16:07.903 [2024-11-20 13:35:19.810465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.903 [2024-11-20 13:35:19.810470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22ec750) 00:16:07.903 [2024-11-20 13:35:19.810478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.903 [2024-11-20 13:35:19.810503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350d40, cid 4, qid 0 00:16:07.903 [2024-11-20 13:35:19.810573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:07.903 [2024-11-20 13:35:19.810585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:07.903 [2024-11-20 13:35:19.810590] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:07.903 [2024-11-20 13:35:19.810594] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22ec750): datao=0, datal=3072, cccid=4 00:16:07.903 [2024-11-20 13:35:19.810599] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2350d40) on tqpair(0x22ec750): expected_datao=0, payload_size=3072 00:16:07.903 [2024-11-20 13:35:19.810603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.903 [2024-11-20 13:35:19.810611] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
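Everything tagged *DEBUG* in this stretch is the admin-queue bring-up of the discovery controller by spdk_nvme_identify: FABRIC CONNECT on qid 0, property reads of VS, CAP and CC, CC.EN written to 1, a wait for CSTS.RDY = 1, IDENTIFY CONTROLLER, async-event and keep-alive setup, and finally GET LOG PAGE (opcode 02h) reads of the discovery log (log identifier 70h) that are decoded below. For comparison only, the same listener can be queried from outside the SPDK harness with the kernel initiator and nvme-cli (not part of this test; assumes nvme-cli is installed, and relies on the nvme-tcp module the script already modprobed above):
  modprobe nvme-tcp
  nvme discover -t tcp -a 10.0.0.3 -s 4420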
00:16:07.903 [2024-11-20 13:35:19.810615] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:07.903 [2024-11-20 13:35:19.810624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.903 [2024-11-20 13:35:19.810631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.903 [2024-11-20 13:35:19.810635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.903 [2024-11-20 13:35:19.810640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350d40) on tqpair=0x22ec750 00:16:07.903 [2024-11-20 13:35:19.810650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.903 [2024-11-20 13:35:19.810655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22ec750) 00:16:07.903 [2024-11-20 13:35:19.810663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.903 [2024-11-20 13:35:19.810687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350d40, cid 4, qid 0 00:16:07.903 [2024-11-20 13:35:19.810750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:07.903 [2024-11-20 13:35:19.810762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:07.903 [2024-11-20 13:35:19.810767] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:07.903 [2024-11-20 13:35:19.810771] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22ec750): datao=0, datal=8, cccid=4 00:16:07.903 [2024-11-20 13:35:19.810776] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2350d40) on tqpair(0x22ec750): expected_datao=0, payload_size=8 00:16:07.903 [2024-11-20 13:35:19.810781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.903 [2024-11-20 13:35:19.810788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:07.903 [2024-11-20 13:35:19.810792] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:07.903 [2024-11-20 13:35:19.810809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.903 [2024-11-20 13:35:19.810817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.903 [2024-11-20 13:35:19.810821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.903 [2024-11-20 13:35:19.810826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350d40) on tqpair=0x22ec750 00:16:07.903 ===================================================== 00:16:07.903 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:07.903 ===================================================== 00:16:07.903 Controller Capabilities/Features 00:16:07.903 ================================ 00:16:07.903 Vendor ID: 0000 00:16:07.903 Subsystem Vendor ID: 0000 00:16:07.903 Serial Number: .................... 00:16:07.903 Model Number: ........................................ 
00:16:07.903 Firmware Version: 25.01 00:16:07.903 Recommended Arb Burst: 0 00:16:07.903 IEEE OUI Identifier: 00 00 00 00:16:07.903 Multi-path I/O 00:16:07.903 May have multiple subsystem ports: No 00:16:07.903 May have multiple controllers: No 00:16:07.903 Associated with SR-IOV VF: No 00:16:07.903 Max Data Transfer Size: 131072 00:16:07.903 Max Number of Namespaces: 0 00:16:07.903 Max Number of I/O Queues: 1024 00:16:07.903 NVMe Specification Version (VS): 1.3 00:16:07.903 NVMe Specification Version (Identify): 1.3 00:16:07.903 Maximum Queue Entries: 128 00:16:07.903 Contiguous Queues Required: Yes 00:16:07.903 Arbitration Mechanisms Supported 00:16:07.903 Weighted Round Robin: Not Supported 00:16:07.903 Vendor Specific: Not Supported 00:16:07.903 Reset Timeout: 15000 ms 00:16:07.903 Doorbell Stride: 4 bytes 00:16:07.903 NVM Subsystem Reset: Not Supported 00:16:07.903 Command Sets Supported 00:16:07.903 NVM Command Set: Supported 00:16:07.903 Boot Partition: Not Supported 00:16:07.903 Memory Page Size Minimum: 4096 bytes 00:16:07.903 Memory Page Size Maximum: 4096 bytes 00:16:07.903 Persistent Memory Region: Not Supported 00:16:07.903 Optional Asynchronous Events Supported 00:16:07.903 Namespace Attribute Notices: Not Supported 00:16:07.903 Firmware Activation Notices: Not Supported 00:16:07.903 ANA Change Notices: Not Supported 00:16:07.903 PLE Aggregate Log Change Notices: Not Supported 00:16:07.903 LBA Status Info Alert Notices: Not Supported 00:16:07.903 EGE Aggregate Log Change Notices: Not Supported 00:16:07.903 Normal NVM Subsystem Shutdown event: Not Supported 00:16:07.903 Zone Descriptor Change Notices: Not Supported 00:16:07.903 Discovery Log Change Notices: Supported 00:16:07.903 Controller Attributes 00:16:07.903 128-bit Host Identifier: Not Supported 00:16:07.903 Non-Operational Permissive Mode: Not Supported 00:16:07.903 NVM Sets: Not Supported 00:16:07.903 Read Recovery Levels: Not Supported 00:16:07.903 Endurance Groups: Not Supported 00:16:07.903 Predictable Latency Mode: Not Supported 00:16:07.903 Traffic Based Keep ALive: Not Supported 00:16:07.903 Namespace Granularity: Not Supported 00:16:07.903 SQ Associations: Not Supported 00:16:07.903 UUID List: Not Supported 00:16:07.903 Multi-Domain Subsystem: Not Supported 00:16:07.903 Fixed Capacity Management: Not Supported 00:16:07.903 Variable Capacity Management: Not Supported 00:16:07.903 Delete Endurance Group: Not Supported 00:16:07.903 Delete NVM Set: Not Supported 00:16:07.903 Extended LBA Formats Supported: Not Supported 00:16:07.903 Flexible Data Placement Supported: Not Supported 00:16:07.903 00:16:07.903 Controller Memory Buffer Support 00:16:07.903 ================================ 00:16:07.903 Supported: No 00:16:07.903 00:16:07.903 Persistent Memory Region Support 00:16:07.903 ================================ 00:16:07.903 Supported: No 00:16:07.903 00:16:07.903 Admin Command Set Attributes 00:16:07.903 ============================ 00:16:07.903 Security Send/Receive: Not Supported 00:16:07.903 Format NVM: Not Supported 00:16:07.903 Firmware Activate/Download: Not Supported 00:16:07.903 Namespace Management: Not Supported 00:16:07.903 Device Self-Test: Not Supported 00:16:07.903 Directives: Not Supported 00:16:07.903 NVMe-MI: Not Supported 00:16:07.903 Virtualization Management: Not Supported 00:16:07.903 Doorbell Buffer Config: Not Supported 00:16:07.903 Get LBA Status Capability: Not Supported 00:16:07.903 Command & Feature Lockdown Capability: Not Supported 00:16:07.903 Abort Command Limit: 1 00:16:07.903 Async 
Event Request Limit: 4 00:16:07.903 Number of Firmware Slots: N/A 00:16:07.903 Firmware Slot 1 Read-Only: N/A 00:16:07.903 Firmware Activation Without Reset: N/A 00:16:07.904 Multiple Update Detection Support: N/A 00:16:07.904 Firmware Update Granularity: No Information Provided 00:16:07.904 Per-Namespace SMART Log: No 00:16:07.904 Asymmetric Namespace Access Log Page: Not Supported 00:16:07.904 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:07.904 Command Effects Log Page: Not Supported 00:16:07.904 Get Log Page Extended Data: Supported 00:16:07.904 Telemetry Log Pages: Not Supported 00:16:07.904 Persistent Event Log Pages: Not Supported 00:16:07.904 Supported Log Pages Log Page: May Support 00:16:07.904 Commands Supported & Effects Log Page: Not Supported 00:16:07.904 Feature Identifiers & Effects Log Page:May Support 00:16:07.904 NVMe-MI Commands & Effects Log Page: May Support 00:16:07.904 Data Area 4 for Telemetry Log: Not Supported 00:16:07.904 Error Log Page Entries Supported: 128 00:16:07.904 Keep Alive: Not Supported 00:16:07.904 00:16:07.904 NVM Command Set Attributes 00:16:07.904 ========================== 00:16:07.904 Submission Queue Entry Size 00:16:07.904 Max: 1 00:16:07.904 Min: 1 00:16:07.904 Completion Queue Entry Size 00:16:07.904 Max: 1 00:16:07.904 Min: 1 00:16:07.904 Number of Namespaces: 0 00:16:07.904 Compare Command: Not Supported 00:16:07.904 Write Uncorrectable Command: Not Supported 00:16:07.904 Dataset Management Command: Not Supported 00:16:07.904 Write Zeroes Command: Not Supported 00:16:07.904 Set Features Save Field: Not Supported 00:16:07.904 Reservations: Not Supported 00:16:07.904 Timestamp: Not Supported 00:16:07.904 Copy: Not Supported 00:16:07.904 Volatile Write Cache: Not Present 00:16:07.904 Atomic Write Unit (Normal): 1 00:16:07.904 Atomic Write Unit (PFail): 1 00:16:07.904 Atomic Compare & Write Unit: 1 00:16:07.904 Fused Compare & Write: Supported 00:16:07.904 Scatter-Gather List 00:16:07.904 SGL Command Set: Supported 00:16:07.904 SGL Keyed: Supported 00:16:07.904 SGL Bit Bucket Descriptor: Not Supported 00:16:07.904 SGL Metadata Pointer: Not Supported 00:16:07.904 Oversized SGL: Not Supported 00:16:07.904 SGL Metadata Address: Not Supported 00:16:07.904 SGL Offset: Supported 00:16:07.904 Transport SGL Data Block: Not Supported 00:16:07.904 Replay Protected Memory Block: Not Supported 00:16:07.904 00:16:07.904 Firmware Slot Information 00:16:07.904 ========================= 00:16:07.904 Active slot: 0 00:16:07.904 00:16:07.904 00:16:07.904 Error Log 00:16:07.904 ========= 00:16:07.904 00:16:07.904 Active Namespaces 00:16:07.904 ================= 00:16:07.904 Discovery Log Page 00:16:07.904 ================== 00:16:07.904 Generation Counter: 2 00:16:07.904 Number of Records: 2 00:16:07.904 Record Format: 0 00:16:07.904 00:16:07.904 Discovery Log Entry 0 00:16:07.904 ---------------------- 00:16:07.904 Transport Type: 3 (TCP) 00:16:07.904 Address Family: 1 (IPv4) 00:16:07.904 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:07.904 Entry Flags: 00:16:07.904 Duplicate Returned Information: 1 00:16:07.904 Explicit Persistent Connection Support for Discovery: 1 00:16:07.904 Transport Requirements: 00:16:07.904 Secure Channel: Not Required 00:16:07.904 Port ID: 0 (0x0000) 00:16:07.904 Controller ID: 65535 (0xffff) 00:16:07.904 Admin Max SQ Size: 128 00:16:07.904 Transport Service Identifier: 4420 00:16:07.904 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:07.904 Transport Address: 10.0.0.3 00:16:07.904 
Discovery Log Entry 1 00:16:07.904 ---------------------- 00:16:07.904 Transport Type: 3 (TCP) 00:16:07.904 Address Family: 1 (IPv4) 00:16:07.904 Subsystem Type: 2 (NVM Subsystem) 00:16:07.904 Entry Flags: 00:16:07.904 Duplicate Returned Information: 0 00:16:07.904 Explicit Persistent Connection Support for Discovery: 0 00:16:07.904 Transport Requirements: 00:16:07.904 Secure Channel: Not Required 00:16:07.904 Port ID: 0 (0x0000) 00:16:07.904 Controller ID: 65535 (0xffff) 00:16:07.904 Admin Max SQ Size: 128 00:16:07.904 Transport Service Identifier: 4420 00:16:07.904 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:07.904 Transport Address: 10.0.0.3 [2024-11-20 13:35:19.810956] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:16:07.904 [2024-11-20 13:35:19.810975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350740) on tqpair=0x22ec750 00:16:07.904 [2024-11-20 13:35:19.810983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.904 [2024-11-20 13:35:19.810989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23508c0) on tqpair=0x22ec750 00:16:07.904 [2024-11-20 13:35:19.810994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.904 [2024-11-20 13:35:19.810999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350a40) on tqpair=0x22ec750 00:16:07.904 [2024-11-20 13:35:19.811004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.904 [2024-11-20 13:35:19.811009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350bc0) on tqpair=0x22ec750 00:16:07.904 [2024-11-20 13:35:19.811014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.904 [2024-11-20 13:35:19.811024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.904 [2024-11-20 13:35:19.811029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.904 [2024-11-20 13:35:19.811033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ec750) 00:16:07.904 [2024-11-20 13:35:19.811042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.904 [2024-11-20 13:35:19.811069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350bc0, cid 3, qid 0 00:16:07.904 [2024-11-20 13:35:19.811123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.904 [2024-11-20 13:35:19.811131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.904 [2024-11-20 13:35:19.811136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.904 [2024-11-20 13:35:19.811140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350bc0) on tqpair=0x22ec750 00:16:07.904 [2024-11-20 13:35:19.811148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.904 [2024-11-20 13:35:19.811153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.904 [2024-11-20 13:35:19.811157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ec750) 00:16:07.904 [2024-11-20 
13:35:19.811165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.904 [2024-11-20 13:35:19.811205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350bc0, cid 3, qid 0 00:16:07.904 [2024-11-20 13:35:19.811272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.904 [2024-11-20 13:35:19.811285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.904 [2024-11-20 13:35:19.811289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.904 [2024-11-20 13:35:19.811294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350bc0) on tqpair=0x22ec750 00:16:07.904 [2024-11-20 13:35:19.811300] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:16:07.905 [2024-11-20 13:35:19.811305] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:16:07.905 [2024-11-20 13:35:19.811316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ec750) 00:16:07.905 [2024-11-20 13:35:19.811334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.905 [2024-11-20 13:35:19.811354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350bc0, cid 3, qid 0 00:16:07.905 [2024-11-20 13:35:19.811396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.905 [2024-11-20 13:35:19.811403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.905 [2024-11-20 13:35:19.811407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350bc0) on tqpair=0x22ec750 00:16:07.905 [2024-11-20 13:35:19.811423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ec750) 00:16:07.905 [2024-11-20 13:35:19.811440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.905 [2024-11-20 13:35:19.811467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350bc0, cid 3, qid 0 00:16:07.905 [2024-11-20 13:35:19.811512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.905 [2024-11-20 13:35:19.811520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.905 [2024-11-20 13:35:19.811524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350bc0) on tqpair=0x22ec750 00:16:07.905 [2024-11-20 13:35:19.811539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811548] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ec750) 00:16:07.905 [2024-11-20 13:35:19.811556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.905 [2024-11-20 13:35:19.811574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350bc0, cid 3, qid 0 00:16:07.905 [2024-11-20 13:35:19.811615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.905 [2024-11-20 13:35:19.811623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.905 [2024-11-20 13:35:19.811627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350bc0) on tqpair=0x22ec750 00:16:07.905 [2024-11-20 13:35:19.811642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ec750) 00:16:07.905 [2024-11-20 13:35:19.811659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.905 [2024-11-20 13:35:19.811677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350bc0, cid 3, qid 0 00:16:07.905 [2024-11-20 13:35:19.811718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.905 [2024-11-20 13:35:19.811729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.905 [2024-11-20 13:35:19.811734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350bc0) on tqpair=0x22ec750 00:16:07.905 [2024-11-20 13:35:19.811749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ec750) 00:16:07.905 [2024-11-20 13:35:19.811766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.905 [2024-11-20 13:35:19.811784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350bc0, cid 3, qid 0 00:16:07.905 [2024-11-20 13:35:19.811833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.905 [2024-11-20 13:35:19.811841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.905 [2024-11-20 13:35:19.811846] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350bc0) on tqpair=0x22ec750 00:16:07.905 [2024-11-20 13:35:19.811861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ec750) 00:16:07.905 [2024-11-20 13:35:19.811878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.905 [2024-11-20 13:35:19.811896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350bc0, cid 3, qid 0 00:16:07.905 [2024-11-20 13:35:19.811943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.905 [2024-11-20 13:35:19.811950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.905 [2024-11-20 13:35:19.811954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350bc0) on tqpair=0x22ec750 00:16:07.905 [2024-11-20 13:35:19.811970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.811979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ec750) 00:16:07.905 [2024-11-20 13:35:19.811986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.905 [2024-11-20 13:35:19.812004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350bc0, cid 3, qid 0 00:16:07.905 [2024-11-20 13:35:19.812056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.905 [2024-11-20 13:35:19.812067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.905 [2024-11-20 13:35:19.812072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.812076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350bc0) on tqpair=0x22ec750 00:16:07.905 [2024-11-20 13:35:19.812087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.812092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.812096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ec750) 00:16:07.905 [2024-11-20 13:35:19.812104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.905 [2024-11-20 13:35:19.812122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350bc0, cid 3, qid 0 00:16:07.905 [2024-11-20 13:35:19.812164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.905 [2024-11-20 13:35:19.812171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.905 [2024-11-20 13:35:19.812175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.812180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350bc0) on tqpair=0x22ec750 00:16:07.905 [2024-11-20 13:35:19.816211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.816227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:07.905 [2024-11-20 13:35:19.816232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ec750) 00:16:07.905 [2024-11-20 13:35:19.816241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.905 [2024-11-20 13:35:19.816266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2350bc0, cid 3, qid 0 00:16:07.905 
[2024-11-20 13:35:19.816322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:07.905 [2024-11-20 13:35:19.816330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:07.906 [2024-11-20 13:35:19.816334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:07.906 [2024-11-20 13:35:19.816339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2350bc0) on tqpair=0x22ec750 00:16:07.906 [2024-11-20 13:35:19.816348] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:16:07.906 00:16:07.906 13:35:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:08.168 [2024-11-20 13:35:19.863662] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:16:08.168 [2024-11-20 13:35:19.863718] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74611 ] 00:16:08.168 [2024-11-20 13:35:20.025560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:16:08.168 [2024-11-20 13:35:20.025624] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:08.168 [2024-11-20 13:35:20.025631] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:08.168 [2024-11-20 13:35:20.025648] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:08.168 [2024-11-20 13:35:20.025660] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:08.168 [2024-11-20 13:35:20.025977] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:16:08.168 [2024-11-20 13:35:20.026051] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ef0750 0 00:16:08.168 [2024-11-20 13:35:20.031266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:08.168 [2024-11-20 13:35:20.031301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:08.168 [2024-11-20 13:35:20.031308] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:08.168 [2024-11-20 13:35:20.031311] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:08.168 [2024-11-20 13:35:20.031345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.168 [2024-11-20 13:35:20.031353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.168 [2024-11-20 13:35:20.031358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef0750) 00:16:08.168 [2024-11-20 13:35:20.031371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:08.168 [2024-11-20 13:35:20.031404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54740, cid 0, qid 0 00:16:08.168 [2024-11-20 13:35:20.038246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.168 [2024-11-20 13:35:20.038261] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.168 [2024-11-20 13:35:20.038266] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.168 [2024-11-20 13:35:20.038271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54740) on tqpair=0x1ef0750 00:16:08.168 [2024-11-20 13:35:20.038282] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:08.168 [2024-11-20 13:35:20.038291] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:16:08.168 [2024-11-20 13:35:20.038298] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:16:08.168 [2024-11-20 13:35:20.038316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.168 [2024-11-20 13:35:20.038322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.168 [2024-11-20 13:35:20.038326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef0750) 00:16:08.168 [2024-11-20 13:35:20.038336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.168 [2024-11-20 13:35:20.038363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54740, cid 0, qid 0 00:16:08.168 [2024-11-20 13:35:20.038422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.168 [2024-11-20 13:35:20.038430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.168 [2024-11-20 13:35:20.038434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.168 [2024-11-20 13:35:20.038438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54740) on tqpair=0x1ef0750 00:16:08.168 [2024-11-20 13:35:20.038445] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:16:08.168 [2024-11-20 13:35:20.038454] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:16:08.168 [2024-11-20 13:35:20.038463] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.168 [2024-11-20 13:35:20.038468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.168 [2024-11-20 13:35:20.038472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef0750) 00:16:08.168 [2024-11-20 13:35:20.038480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.168 [2024-11-20 13:35:20.038501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54740, cid 0, qid 0 00:16:08.168 [2024-11-20 13:35:20.038555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.168 [2024-11-20 13:35:20.038563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.168 [2024-11-20 13:35:20.038567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.168 [2024-11-20 13:35:20.038571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54740) on tqpair=0x1ef0750 00:16:08.168 [2024-11-20 13:35:20.038577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:16:08.168 [2024-11-20 13:35:20.038587] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:08.168 [2024-11-20 13:35:20.038595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.168 [2024-11-20 13:35:20.038600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.038604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef0750) 00:16:08.169 [2024-11-20 13:35:20.038612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.169 [2024-11-20 13:35:20.038631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54740, cid 0, qid 0 00:16:08.169 [2024-11-20 13:35:20.038681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.169 [2024-11-20 13:35:20.038688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.169 [2024-11-20 13:35:20.038692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.038696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54740) on tqpair=0x1ef0750 00:16:08.169 [2024-11-20 13:35:20.038703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:08.169 [2024-11-20 13:35:20.038714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.038719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.038723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef0750) 00:16:08.169 [2024-11-20 13:35:20.038731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.169 [2024-11-20 13:35:20.038755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54740, cid 0, qid 0 00:16:08.169 [2024-11-20 13:35:20.038797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.169 [2024-11-20 13:35:20.038804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.169 [2024-11-20 13:35:20.038808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.038813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54740) on tqpair=0x1ef0750 00:16:08.169 [2024-11-20 13:35:20.038819] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:08.169 [2024-11-20 13:35:20.038824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:08.169 [2024-11-20 13:35:20.038832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:08.169 [2024-11-20 13:35:20.038945] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:16:08.169 [2024-11-20 13:35:20.038956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:08.169 [2024-11-20 13:35:20.038968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.038973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.038977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef0750) 00:16:08.169 [2024-11-20 13:35:20.038985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.169 [2024-11-20 13:35:20.039007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54740, cid 0, qid 0 00:16:08.169 [2024-11-20 13:35:20.039051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.169 [2024-11-20 13:35:20.039058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.169 [2024-11-20 13:35:20.039062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54740) on tqpair=0x1ef0750 00:16:08.169 [2024-11-20 13:35:20.039072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:08.169 [2024-11-20 13:35:20.039083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef0750) 00:16:08.169 [2024-11-20 13:35:20.039100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.169 [2024-11-20 13:35:20.039119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54740, cid 0, qid 0 00:16:08.169 [2024-11-20 13:35:20.039165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.169 [2024-11-20 13:35:20.039180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.169 [2024-11-20 13:35:20.039204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54740) on tqpair=0x1ef0750 00:16:08.169 [2024-11-20 13:35:20.039215] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:08.169 [2024-11-20 13:35:20.039221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:08.169 [2024-11-20 13:35:20.039231] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:16:08.169 [2024-11-20 13:35:20.039247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:08.169 [2024-11-20 13:35:20.039258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef0750) 00:16:08.169 [2024-11-20 13:35:20.039277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.169 [2024-11-20 13:35:20.039300] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54740, cid 0, qid 0 00:16:08.169 [2024-11-20 13:35:20.039398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:08.169 [2024-11-20 13:35:20.039413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:08.169 [2024-11-20 13:35:20.039418] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039423] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef0750): datao=0, datal=4096, cccid=0 00:16:08.169 [2024-11-20 13:35:20.039428] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f54740) on tqpair(0x1ef0750): expected_datao=0, payload_size=4096 00:16:08.169 [2024-11-20 13:35:20.039433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039443] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039447] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.169 [2024-11-20 13:35:20.039463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.169 [2024-11-20 13:35:20.039467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54740) on tqpair=0x1ef0750 00:16:08.169 [2024-11-20 13:35:20.039480] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:16:08.169 [2024-11-20 13:35:20.039486] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:16:08.169 [2024-11-20 13:35:20.039491] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:16:08.169 [2024-11-20 13:35:20.039496] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:16:08.169 [2024-11-20 13:35:20.039502] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:16:08.169 [2024-11-20 13:35:20.039507] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:16:08.169 [2024-11-20 13:35:20.039522] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:08.169 [2024-11-20 13:35:20.039531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef0750) 00:16:08.169 [2024-11-20 13:35:20.039548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:08.169 [2024-11-20 13:35:20.039571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54740, cid 0, qid 0 00:16:08.169 [2024-11-20 13:35:20.039628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.169 [2024-11-20 13:35:20.039635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.169 [2024-11-20 
13:35:20.039639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54740) on tqpair=0x1ef0750 00:16:08.169 [2024-11-20 13:35:20.039652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef0750) 00:16:08.169 [2024-11-20 13:35:20.039667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.169 [2024-11-20 13:35:20.039674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ef0750) 00:16:08.169 [2024-11-20 13:35:20.039688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.169 [2024-11-20 13:35:20.039695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ef0750) 00:16:08.169 [2024-11-20 13:35:20.039710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.169 [2024-11-20 13:35:20.039716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039720] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.169 [2024-11-20 13:35:20.039730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.169 [2024-11-20 13:35:20.039735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:08.169 [2024-11-20 13:35:20.039749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:08.169 [2024-11-20 13:35:20.039758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.169 [2024-11-20 13:35:20.039763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef0750) 00:16:08.169 [2024-11-20 13:35:20.039770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.170 [2024-11-20 13:35:20.039792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54740, cid 0, qid 0 00:16:08.170 [2024-11-20 13:35:20.039800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f548c0, cid 1, qid 0 00:16:08.170 [2024-11-20 13:35:20.039805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54a40, cid 2, qid 0 00:16:08.170 
[2024-11-20 13:35:20.039810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.170 [2024-11-20 13:35:20.039815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54d40, cid 4, qid 0 00:16:08.170 [2024-11-20 13:35:20.039907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.170 [2024-11-20 13:35:20.039926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.170 [2024-11-20 13:35:20.039931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.039935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54d40) on tqpair=0x1ef0750 00:16:08.170 [2024-11-20 13:35:20.039941] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:16:08.170 [2024-11-20 13:35:20.039947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.039957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.039969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.039977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.039982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.039986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef0750) 00:16:08.170 [2024-11-20 13:35:20.039994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:08.170 [2024-11-20 13:35:20.040015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54d40, cid 4, qid 0 00:16:08.170 [2024-11-20 13:35:20.040062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.170 [2024-11-20 13:35:20.040074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.170 [2024-11-20 13:35:20.040078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54d40) on tqpair=0x1ef0750 00:16:08.170 [2024-11-20 13:35:20.040150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.040162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.040172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef0750) 00:16:08.170 [2024-11-20 13:35:20.040197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.170 [2024-11-20 13:35:20.040220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54d40, cid 4, qid 0 00:16:08.170 
[2024-11-20 13:35:20.040285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:08.170 [2024-11-20 13:35:20.040293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:08.170 [2024-11-20 13:35:20.040297] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040301] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef0750): datao=0, datal=4096, cccid=4 00:16:08.170 [2024-11-20 13:35:20.040306] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f54d40) on tqpair(0x1ef0750): expected_datao=0, payload_size=4096 00:16:08.170 [2024-11-20 13:35:20.040311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040319] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040323] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.170 [2024-11-20 13:35:20.040338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.170 [2024-11-20 13:35:20.040341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54d40) on tqpair=0x1ef0750 00:16:08.170 [2024-11-20 13:35:20.040363] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:16:08.170 [2024-11-20 13:35:20.040375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.040386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.040395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef0750) 00:16:08.170 [2024-11-20 13:35:20.040407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.170 [2024-11-20 13:35:20.040436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54d40, cid 4, qid 0 00:16:08.170 [2024-11-20 13:35:20.040518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:08.170 [2024-11-20 13:35:20.040525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:08.170 [2024-11-20 13:35:20.040529] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040533] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef0750): datao=0, datal=4096, cccid=4 00:16:08.170 [2024-11-20 13:35:20.040538] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f54d40) on tqpair(0x1ef0750): expected_datao=0, payload_size=4096 00:16:08.170 [2024-11-20 13:35:20.040543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040551] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040555] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:16:08.170 [2024-11-20 13:35:20.040570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.170 [2024-11-20 13:35:20.040573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54d40) on tqpair=0x1ef0750 00:16:08.170 [2024-11-20 13:35:20.040597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.040609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.040618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef0750) 00:16:08.170 [2024-11-20 13:35:20.040630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.170 [2024-11-20 13:35:20.040651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54d40, cid 4, qid 0 00:16:08.170 [2024-11-20 13:35:20.040713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:08.170 [2024-11-20 13:35:20.040721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:08.170 [2024-11-20 13:35:20.040724] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040728] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef0750): datao=0, datal=4096, cccid=4 00:16:08.170 [2024-11-20 13:35:20.040733] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f54d40) on tqpair(0x1ef0750): expected_datao=0, payload_size=4096 00:16:08.170 [2024-11-20 13:35:20.040739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040746] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040750] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.170 [2024-11-20 13:35:20.040765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.170 [2024-11-20 13:35:20.040769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54d40) on tqpair=0x1ef0750 00:16:08.170 [2024-11-20 13:35:20.040783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.040792] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.040804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.040811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:08.170 [2024-11-20 
13:35:20.040817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.040823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.040828] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:16:08.170 [2024-11-20 13:35:20.040833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:16:08.170 [2024-11-20 13:35:20.040839] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:16:08.170 [2024-11-20 13:35:20.040857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef0750) 00:16:08.170 [2024-11-20 13:35:20.040870] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.170 [2024-11-20 13:35:20.040878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.170 [2024-11-20 13:35:20.040886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ef0750) 00:16:08.170 [2024-11-20 13:35:20.040892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.170 [2024-11-20 13:35:20.040919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54d40, cid 4, qid 0 00:16:08.170 [2024-11-20 13:35:20.040927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54ec0, cid 5, qid 0 00:16:08.170 [2024-11-20 13:35:20.041016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.170 [2024-11-20 13:35:20.041029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.170 [2024-11-20 13:35:20.041033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54d40) on tqpair=0x1ef0750 00:16:08.171 [2024-11-20 13:35:20.041045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.171 [2024-11-20 13:35:20.041054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.171 [2024-11-20 13:35:20.041058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54ec0) on tqpair=0x1ef0750 00:16:08.171 [2024-11-20 13:35:20.041073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041078] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ef0750) 00:16:08.171 [2024-11-20 13:35:20.041086] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.171 [2024-11-20 13:35:20.041107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54ec0, cid 5, qid 0 
00:16:08.171 [2024-11-20 13:35:20.041158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.171 [2024-11-20 13:35:20.041165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.171 [2024-11-20 13:35:20.041169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54ec0) on tqpair=0x1ef0750 00:16:08.171 [2024-11-20 13:35:20.041196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ef0750) 00:16:08.171 [2024-11-20 13:35:20.041210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.171 [2024-11-20 13:35:20.041231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54ec0, cid 5, qid 0 00:16:08.171 [2024-11-20 13:35:20.041290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.171 [2024-11-20 13:35:20.041298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.171 [2024-11-20 13:35:20.041301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54ec0) on tqpair=0x1ef0750 00:16:08.171 [2024-11-20 13:35:20.041317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ef0750) 00:16:08.171 [2024-11-20 13:35:20.041329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.171 [2024-11-20 13:35:20.041347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54ec0, cid 5, qid 0 00:16:08.171 [2024-11-20 13:35:20.041394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.171 [2024-11-20 13:35:20.041401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.171 [2024-11-20 13:35:20.041405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54ec0) on tqpair=0x1ef0750 00:16:08.171 [2024-11-20 13:35:20.041429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ef0750) 00:16:08.171 [2024-11-20 13:35:20.041443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.171 [2024-11-20 13:35:20.041451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef0750) 00:16:08.171 [2024-11-20 13:35:20.041462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.171 [2024-11-20 13:35:20.041476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:16:08.171 [2024-11-20 13:35:20.041480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ef0750) 00:16:08.171 [2024-11-20 13:35:20.041486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.171 [2024-11-20 13:35:20.041495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ef0750) 00:16:08.171 [2024-11-20 13:35:20.041506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.171 [2024-11-20 13:35:20.041528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54ec0, cid 5, qid 0 00:16:08.171 [2024-11-20 13:35:20.041535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54d40, cid 4, qid 0 00:16:08.171 [2024-11-20 13:35:20.041540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f55040, cid 6, qid 0 00:16:08.171 [2024-11-20 13:35:20.041545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f551c0, cid 7, qid 0 00:16:08.171 [2024-11-20 13:35:20.041685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:08.171 [2024-11-20 13:35:20.041697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:08.171 [2024-11-20 13:35:20.041701] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041705] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef0750): datao=0, datal=8192, cccid=5 00:16:08.171 [2024-11-20 13:35:20.041710] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f54ec0) on tqpair(0x1ef0750): expected_datao=0, payload_size=8192 00:16:08.171 [2024-11-20 13:35:20.041715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041734] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041739] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:08.171 [2024-11-20 13:35:20.041751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:08.171 [2024-11-20 13:35:20.041755] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041759] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef0750): datao=0, datal=512, cccid=4 00:16:08.171 [2024-11-20 13:35:20.041764] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f54d40) on tqpair(0x1ef0750): expected_datao=0, payload_size=512 00:16:08.171 [2024-11-20 13:35:20.041768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041775] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041779] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:08.171 [2024-11-20 13:35:20.041790] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:08.171 [2024-11-20 13:35:20.041794] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041798] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef0750): datao=0, datal=512, cccid=6 00:16:08.171 [2024-11-20 13:35:20.041803] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f55040) on tqpair(0x1ef0750): expected_datao=0, payload_size=512 00:16:08.171 [2024-11-20 13:35:20.041807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041814] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041818] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:08.171 [2024-11-20 13:35:20.041830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:08.171 [2024-11-20 13:35:20.041834] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041838] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef0750): datao=0, datal=4096, cccid=7 00:16:08.171 [2024-11-20 13:35:20.041843] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f551c0) on tqpair(0x1ef0750): expected_datao=0, payload_size=4096 00:16:08.171 [2024-11-20 13:35:20.041847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041854] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041858] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.171 [2024-11-20 13:35:20.041870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.171 [2024-11-20 13:35:20.041874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54ec0) on tqpair=0x1ef0750 00:16:08.171 [2024-11-20 13:35:20.041895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.171 [2024-11-20 13:35:20.041902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.171 [2024-11-20 13:35:20.041906] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54d40) on tqpair=0x1ef0750 00:16:08.171 [2024-11-20 13:35:20.041923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.171 [2024-11-20 13:35:20.041930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.171 [2024-11-20 13:35:20.041933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f55040) on tqpair=0x1ef0750 00:16:08.171 [2024-11-20 13:35:20.041945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.171 [2024-11-20 13:35:20.041951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.171 [2024-11-20 13:35:20.041955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.171 [2024-11-20 13:35:20.041959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f551c0) on tqpair=0x1ef0750 00:16:08.171 
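[editor's note] The identify summary that follows was produced by the spdk_nvme_identify invocation shown earlier in this log (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all), and the debug trace above is the admin-queue bring-up that command triggers (FABRIC CONNECT, VS/CAP reads, CC.EN=1, wait for CSTS.RDY=1, then IDENTIFY). As a rough illustration only, here is a minimal C sketch of the same flow using the public SPDK NVMe host API; it is not part of the test, the address/NQN values are simply copied from the log, and error handling and namespace enumeration are trimmed.

    /* Minimal sketch, assuming a standard SPDK build environment.
     * Connects to the TCP subsystem from the log and prints a few
     * identify-controller fields, roughly what spdk_nvme_identify does. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&opts);
        opts.name = "identify_sketch";      /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Same transport ID string the test passes via -r. */
        spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        /* Synchronous connect: drives the controller init state machine
         * traced above (connect adminq, read vs/cap, enable, ready). */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model Number: %.40s\n", cdata->mn);
        printf("Firmware Version: %.8s\n", cdata->fr);
        printf("Number of Namespaces: %u\n", cdata->nn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

The fields printed here correspond to the "Model Number", "Firmware Version", and "Number of Namespaces" lines in the controller summary below.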
===================================================== 00:16:08.171 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:08.171 ===================================================== 00:16:08.171 Controller Capabilities/Features 00:16:08.171 ================================ 00:16:08.171 Vendor ID: 8086 00:16:08.171 Subsystem Vendor ID: 8086 00:16:08.171 Serial Number: SPDK00000000000001 00:16:08.171 Model Number: SPDK bdev Controller 00:16:08.171 Firmware Version: 25.01 00:16:08.171 Recommended Arb Burst: 6 00:16:08.171 IEEE OUI Identifier: e4 d2 5c 00:16:08.171 Multi-path I/O 00:16:08.171 May have multiple subsystem ports: Yes 00:16:08.171 May have multiple controllers: Yes 00:16:08.171 Associated with SR-IOV VF: No 00:16:08.171 Max Data Transfer Size: 131072 00:16:08.171 Max Number of Namespaces: 32 00:16:08.171 Max Number of I/O Queues: 127 00:16:08.171 NVMe Specification Version (VS): 1.3 00:16:08.171 NVMe Specification Version (Identify): 1.3 00:16:08.171 Maximum Queue Entries: 128 00:16:08.171 Contiguous Queues Required: Yes 00:16:08.172 Arbitration Mechanisms Supported 00:16:08.172 Weighted Round Robin: Not Supported 00:16:08.172 Vendor Specific: Not Supported 00:16:08.172 Reset Timeout: 15000 ms 00:16:08.172 Doorbell Stride: 4 bytes 00:16:08.172 NVM Subsystem Reset: Not Supported 00:16:08.172 Command Sets Supported 00:16:08.172 NVM Command Set: Supported 00:16:08.172 Boot Partition: Not Supported 00:16:08.172 Memory Page Size Minimum: 4096 bytes 00:16:08.172 Memory Page Size Maximum: 4096 bytes 00:16:08.172 Persistent Memory Region: Not Supported 00:16:08.172 Optional Asynchronous Events Supported 00:16:08.172 Namespace Attribute Notices: Supported 00:16:08.172 Firmware Activation Notices: Not Supported 00:16:08.172 ANA Change Notices: Not Supported 00:16:08.172 PLE Aggregate Log Change Notices: Not Supported 00:16:08.172 LBA Status Info Alert Notices: Not Supported 00:16:08.172 EGE Aggregate Log Change Notices: Not Supported 00:16:08.172 Normal NVM Subsystem Shutdown event: Not Supported 00:16:08.172 Zone Descriptor Change Notices: Not Supported 00:16:08.172 Discovery Log Change Notices: Not Supported 00:16:08.172 Controller Attributes 00:16:08.172 128-bit Host Identifier: Supported 00:16:08.172 Non-Operational Permissive Mode: Not Supported 00:16:08.172 NVM Sets: Not Supported 00:16:08.172 Read Recovery Levels: Not Supported 00:16:08.172 Endurance Groups: Not Supported 00:16:08.172 Predictable Latency Mode: Not Supported 00:16:08.172 Traffic Based Keep ALive: Not Supported 00:16:08.172 Namespace Granularity: Not Supported 00:16:08.172 SQ Associations: Not Supported 00:16:08.172 UUID List: Not Supported 00:16:08.172 Multi-Domain Subsystem: Not Supported 00:16:08.172 Fixed Capacity Management: Not Supported 00:16:08.172 Variable Capacity Management: Not Supported 00:16:08.172 Delete Endurance Group: Not Supported 00:16:08.172 Delete NVM Set: Not Supported 00:16:08.172 Extended LBA Formats Supported: Not Supported 00:16:08.172 Flexible Data Placement Supported: Not Supported 00:16:08.172 00:16:08.172 Controller Memory Buffer Support 00:16:08.172 ================================ 00:16:08.172 Supported: No 00:16:08.172 00:16:08.172 Persistent Memory Region Support 00:16:08.172 ================================ 00:16:08.172 Supported: No 00:16:08.172 00:16:08.172 Admin Command Set Attributes 00:16:08.172 ============================ 00:16:08.172 Security Send/Receive: Not Supported 00:16:08.172 Format NVM: Not Supported 00:16:08.172 Firmware Activate/Download: 
Not Supported 00:16:08.172 Namespace Management: Not Supported 00:16:08.172 Device Self-Test: Not Supported 00:16:08.172 Directives: Not Supported 00:16:08.172 NVMe-MI: Not Supported 00:16:08.172 Virtualization Management: Not Supported 00:16:08.172 Doorbell Buffer Config: Not Supported 00:16:08.172 Get LBA Status Capability: Not Supported 00:16:08.172 Command & Feature Lockdown Capability: Not Supported 00:16:08.172 Abort Command Limit: 4 00:16:08.172 Async Event Request Limit: 4 00:16:08.172 Number of Firmware Slots: N/A 00:16:08.172 Firmware Slot 1 Read-Only: N/A 00:16:08.172 Firmware Activation Without Reset: N/A 00:16:08.172 Multiple Update Detection Support: N/A 00:16:08.172 Firmware Update Granularity: No Information Provided 00:16:08.172 Per-Namespace SMART Log: No 00:16:08.172 Asymmetric Namespace Access Log Page: Not Supported 00:16:08.172 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:08.172 Command Effects Log Page: Supported 00:16:08.172 Get Log Page Extended Data: Supported 00:16:08.172 Telemetry Log Pages: Not Supported 00:16:08.172 Persistent Event Log Pages: Not Supported 00:16:08.172 Supported Log Pages Log Page: May Support 00:16:08.172 Commands Supported & Effects Log Page: Not Supported 00:16:08.172 Feature Identifiers & Effects Log Page:May Support 00:16:08.172 NVMe-MI Commands & Effects Log Page: May Support 00:16:08.172 Data Area 4 for Telemetry Log: Not Supported 00:16:08.172 Error Log Page Entries Supported: 128 00:16:08.172 Keep Alive: Supported 00:16:08.172 Keep Alive Granularity: 10000 ms 00:16:08.172 00:16:08.172 NVM Command Set Attributes 00:16:08.172 ========================== 00:16:08.172 Submission Queue Entry Size 00:16:08.172 Max: 64 00:16:08.172 Min: 64 00:16:08.172 Completion Queue Entry Size 00:16:08.172 Max: 16 00:16:08.172 Min: 16 00:16:08.172 Number of Namespaces: 32 00:16:08.172 Compare Command: Supported 00:16:08.172 Write Uncorrectable Command: Not Supported 00:16:08.172 Dataset Management Command: Supported 00:16:08.172 Write Zeroes Command: Supported 00:16:08.172 Set Features Save Field: Not Supported 00:16:08.172 Reservations: Supported 00:16:08.172 Timestamp: Not Supported 00:16:08.172 Copy: Supported 00:16:08.172 Volatile Write Cache: Present 00:16:08.172 Atomic Write Unit (Normal): 1 00:16:08.172 Atomic Write Unit (PFail): 1 00:16:08.172 Atomic Compare & Write Unit: 1 00:16:08.172 Fused Compare & Write: Supported 00:16:08.172 Scatter-Gather List 00:16:08.172 SGL Command Set: Supported 00:16:08.172 SGL Keyed: Supported 00:16:08.172 SGL Bit Bucket Descriptor: Not Supported 00:16:08.172 SGL Metadata Pointer: Not Supported 00:16:08.172 Oversized SGL: Not Supported 00:16:08.172 SGL Metadata Address: Not Supported 00:16:08.172 SGL Offset: Supported 00:16:08.172 Transport SGL Data Block: Not Supported 00:16:08.172 Replay Protected Memory Block: Not Supported 00:16:08.172 00:16:08.172 Firmware Slot Information 00:16:08.172 ========================= 00:16:08.172 Active slot: 1 00:16:08.172 Slot 1 Firmware Revision: 25.01 00:16:08.172 00:16:08.172 00:16:08.172 Commands Supported and Effects 00:16:08.172 ============================== 00:16:08.172 Admin Commands 00:16:08.172 -------------- 00:16:08.172 Get Log Page (02h): Supported 00:16:08.172 Identify (06h): Supported 00:16:08.172 Abort (08h): Supported 00:16:08.172 Set Features (09h): Supported 00:16:08.172 Get Features (0Ah): Supported 00:16:08.172 Asynchronous Event Request (0Ch): Supported 00:16:08.172 Keep Alive (18h): Supported 00:16:08.172 I/O Commands 00:16:08.172 ------------ 00:16:08.172 
Flush (00h): Supported LBA-Change 00:16:08.172 Write (01h): Supported LBA-Change 00:16:08.172 Read (02h): Supported 00:16:08.172 Compare (05h): Supported 00:16:08.172 Write Zeroes (08h): Supported LBA-Change 00:16:08.172 Dataset Management (09h): Supported LBA-Change 00:16:08.172 Copy (19h): Supported LBA-Change 00:16:08.172 00:16:08.172 Error Log 00:16:08.172 ========= 00:16:08.172 00:16:08.172 Arbitration 00:16:08.172 =========== 00:16:08.172 Arbitration Burst: 1 00:16:08.172 00:16:08.172 Power Management 00:16:08.172 ================ 00:16:08.172 Number of Power States: 1 00:16:08.172 Current Power State: Power State #0 00:16:08.172 Power State #0: 00:16:08.172 Max Power: 0.00 W 00:16:08.172 Non-Operational State: Operational 00:16:08.172 Entry Latency: Not Reported 00:16:08.172 Exit Latency: Not Reported 00:16:08.172 Relative Read Throughput: 0 00:16:08.172 Relative Read Latency: 0 00:16:08.172 Relative Write Throughput: 0 00:16:08.172 Relative Write Latency: 0 00:16:08.172 Idle Power: Not Reported 00:16:08.172 Active Power: Not Reported 00:16:08.172 Non-Operational Permissive Mode: Not Supported 00:16:08.172 00:16:08.172 Health Information 00:16:08.172 ================== 00:16:08.172 Critical Warnings: 00:16:08.172 Available Spare Space: OK 00:16:08.172 Temperature: OK 00:16:08.172 Device Reliability: OK 00:16:08.172 Read Only: No 00:16:08.172 Volatile Memory Backup: OK 00:16:08.172 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:08.172 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:08.172 Available Spare: 0% 00:16:08.172 Available Spare Threshold: 0% 00:16:08.172 Life Percentage Used:[2024-11-20 13:35:20.042068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.172 [2024-11-20 13:35:20.042075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ef0750) 00:16:08.172 [2024-11-20 13:35:20.042083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.172 [2024-11-20 13:35:20.042108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f551c0, cid 7, qid 0 00:16:08.172 [2024-11-20 13:35:20.042156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.172 [2024-11-20 13:35:20.042163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.172 [2024-11-20 13:35:20.042167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.172 [2024-11-20 13:35:20.042172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f551c0) on tqpair=0x1ef0750 00:16:08.172 [2024-11-20 13:35:20.046227] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:16:08.172 [2024-11-20 13:35:20.046256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54740) on tqpair=0x1ef0750 00:16:08.172 [2024-11-20 13:35:20.046266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.172 [2024-11-20 13:35:20.046272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f548c0) on tqpair=0x1ef0750 00:16:08.173 [2024-11-20 13:35:20.046277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.173 [2024-11-20 13:35:20.046282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54a40) on tqpair=0x1ef0750 
00:16:08.173 [2024-11-20 13:35:20.046287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.173 [2024-11-20 13:35:20.046293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.173 [2024-11-20 13:35:20.046298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.173 [2024-11-20 13:35:20.046309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.173 [2024-11-20 13:35:20.046326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.173 [2024-11-20 13:35:20.046354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.173 [2024-11-20 13:35:20.046404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.173 [2024-11-20 13:35:20.046412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.173 [2024-11-20 13:35:20.046416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.173 [2024-11-20 13:35:20.046429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.173 [2024-11-20 13:35:20.046446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.173 [2024-11-20 13:35:20.046470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.173 [2024-11-20 13:35:20.046539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.173 [2024-11-20 13:35:20.046547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.173 [2024-11-20 13:35:20.046551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.173 [2024-11-20 13:35:20.046561] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:16:08.173 [2024-11-20 13:35:20.046567] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:16:08.173 [2024-11-20 13:35:20.046578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.173 [2024-11-20 13:35:20.046595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.173 
[2024-11-20 13:35:20.046614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.173 [2024-11-20 13:35:20.046663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.173 [2024-11-20 13:35:20.046670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.173 [2024-11-20 13:35:20.046674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.173 [2024-11-20 13:35:20.046690] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.173 [2024-11-20 13:35:20.046707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.173 [2024-11-20 13:35:20.046726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.173 [2024-11-20 13:35:20.046768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.173 [2024-11-20 13:35:20.046776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.173 [2024-11-20 13:35:20.046780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.173 [2024-11-20 13:35:20.046795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.173 [2024-11-20 13:35:20.046811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.173 [2024-11-20 13:35:20.046830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.173 [2024-11-20 13:35:20.046873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.173 [2024-11-20 13:35:20.046894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.173 [2024-11-20 13:35:20.046899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.173 [2024-11-20 13:35:20.046915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.046925] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.173 [2024-11-20 13:35:20.046933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.173 [2024-11-20 13:35:20.046953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.173 [2024-11-20 13:35:20.046996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:16:08.173 [2024-11-20 13:35:20.047008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.173 [2024-11-20 13:35:20.047012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.047017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.173 [2024-11-20 13:35:20.047028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.047034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.047038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.173 [2024-11-20 13:35:20.047045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.173 [2024-11-20 13:35:20.047065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.173 [2024-11-20 13:35:20.047114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.173 [2024-11-20 13:35:20.047121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.173 [2024-11-20 13:35:20.047125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.047129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.173 [2024-11-20 13:35:20.047140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.047145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.047149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.173 [2024-11-20 13:35:20.047157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.173 [2024-11-20 13:35:20.047176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.173 [2024-11-20 13:35:20.047240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.173 [2024-11-20 13:35:20.047250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.173 [2024-11-20 13:35:20.047253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.047258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.173 [2024-11-20 13:35:20.047269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.047274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.173 [2024-11-20 13:35:20.047278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.174 [2024-11-20 13:35:20.047286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.174 [2024-11-20 13:35:20.047307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.174 [2024-11-20 13:35:20.047351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.174 [2024-11-20 13:35:20.047358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.174 [2024-11-20 13:35:20.047362] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.174 [2024-11-20 13:35:20.047377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.174 [2024-11-20 13:35:20.047394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.174 [2024-11-20 13:35:20.047412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.174 [2024-11-20 13:35:20.047458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.174 [2024-11-20 13:35:20.047466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.174 [2024-11-20 13:35:20.047470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.174 [2024-11-20 13:35:20.047485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.174 [2024-11-20 13:35:20.047502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.174 [2024-11-20 13:35:20.047521] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.174 [2024-11-20 13:35:20.047566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.174 [2024-11-20 13:35:20.047578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.174 [2024-11-20 13:35:20.047583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.174 [2024-11-20 13:35:20.047599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.174 [2024-11-20 13:35:20.047616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.174 [2024-11-20 13:35:20.047635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.174 [2024-11-20 13:35:20.047691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.174 [2024-11-20 13:35:20.047698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.174 [2024-11-20 13:35:20.047702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 
00:16:08.174 [2024-11-20 13:35:20.047717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047722] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.174 [2024-11-20 13:35:20.047734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.174 [2024-11-20 13:35:20.047753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.174 [2024-11-20 13:35:20.047801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.174 [2024-11-20 13:35:20.047809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.174 [2024-11-20 13:35:20.047813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.174 [2024-11-20 13:35:20.047828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.174 [2024-11-20 13:35:20.047844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.174 [2024-11-20 13:35:20.047863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.174 [2024-11-20 13:35:20.047909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.174 [2024-11-20 13:35:20.047916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.174 [2024-11-20 13:35:20.047920] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.174 [2024-11-20 13:35:20.047935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.047944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.174 [2024-11-20 13:35:20.047951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.174 [2024-11-20 13:35:20.047970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.174 [2024-11-20 13:35:20.048016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.174 [2024-11-20 13:35:20.048023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.174 [2024-11-20 13:35:20.048027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.174 [2024-11-20 13:35:20.048042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:16:08.174 [2024-11-20 13:35:20.048051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.174 [2024-11-20 13:35:20.048058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.174 [2024-11-20 13:35:20.048077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.174 [2024-11-20 13:35:20.048120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.174 [2024-11-20 13:35:20.048127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.174 [2024-11-20 13:35:20.048131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.174 [2024-11-20 13:35:20.048146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.174 [2024-11-20 13:35:20.048163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.174 [2024-11-20 13:35:20.048182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.174 [2024-11-20 13:35:20.048242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.174 [2024-11-20 13:35:20.048251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.174 [2024-11-20 13:35:20.048255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.174 [2024-11-20 13:35:20.048270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.174 [2024-11-20 13:35:20.048287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.174 [2024-11-20 13:35:20.048307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.174 [2024-11-20 13:35:20.048351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.174 [2024-11-20 13:35:20.048359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.174 [2024-11-20 13:35:20.048362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.174 [2024-11-20 13:35:20.048377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.174 [2024-11-20 13:35:20.048394] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.174 [2024-11-20 13:35:20.048413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.174 [2024-11-20 13:35:20.048462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.174 [2024-11-20 13:35:20.048469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.174 [2024-11-20 13:35:20.048473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.174 [2024-11-20 13:35:20.048488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.174 [2024-11-20 13:35:20.048505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.174 [2024-11-20 13:35:20.048524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.174 [2024-11-20 13:35:20.048566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.174 [2024-11-20 13:35:20.048573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.174 [2024-11-20 13:35:20.048577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.174 [2024-11-20 13:35:20.048581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.175 [2024-11-20 13:35:20.048592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.048597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.048601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.175 [2024-11-20 13:35:20.048609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.175 [2024-11-20 13:35:20.048628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.175 [2024-11-20 13:35:20.048671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.175 [2024-11-20 13:35:20.048683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.175 [2024-11-20 13:35:20.048687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.048692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.175 [2024-11-20 13:35:20.048703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.048708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.048712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.175 [2024-11-20 13:35:20.048720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.175 [2024-11-20 13:35:20.048740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.175 [2024-11-20 13:35:20.048789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.175 [2024-11-20 13:35:20.048796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.175 [2024-11-20 13:35:20.048800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.048804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.175 [2024-11-20 13:35:20.048815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.048820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.048824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.175 [2024-11-20 13:35:20.048831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.175 [2024-11-20 13:35:20.048850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.175 [2024-11-20 13:35:20.048899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.175 [2024-11-20 13:35:20.048906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.175 [2024-11-20 13:35:20.048910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.048914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.175 [2024-11-20 13:35:20.048925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.048930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.048934] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.175 [2024-11-20 13:35:20.048942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.175 [2024-11-20 13:35:20.048971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.175 [2024-11-20 13:35:20.049016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.175 [2024-11-20 13:35:20.049023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.175 [2024-11-20 13:35:20.049027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.175 [2024-11-20 13:35:20.049043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049048] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.175 [2024-11-20 13:35:20.049060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.175 [2024-11-20 13:35:20.049079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.175 [2024-11-20 13:35:20.049128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.175 [2024-11-20 13:35:20.049135] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.175 [2024-11-20 13:35:20.049139] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.175 [2024-11-20 13:35:20.049154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.175 [2024-11-20 13:35:20.049171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.175 [2024-11-20 13:35:20.049201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.175 [2024-11-20 13:35:20.049255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.175 [2024-11-20 13:35:20.049263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.175 [2024-11-20 13:35:20.049266] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.175 [2024-11-20 13:35:20.049282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.175 [2024-11-20 13:35:20.049299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.175 [2024-11-20 13:35:20.049319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.175 [2024-11-20 13:35:20.049361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.175 [2024-11-20 13:35:20.049369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.175 [2024-11-20 13:35:20.049373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.175 [2024-11-20 13:35:20.049388] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.175 [2024-11-20 13:35:20.049405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.175 [2024-11-20 13:35:20.049424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.175 [2024-11-20 13:35:20.049479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.175 [2024-11-20 13:35:20.049486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.175 [2024-11-20 13:35:20.049490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.175 [2024-11-20 
13:35:20.049494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.175 [2024-11-20 13:35:20.049505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.175 [2024-11-20 13:35:20.049522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.175 [2024-11-20 13:35:20.049540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.175 [2024-11-20 13:35:20.049594] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.175 [2024-11-20 13:35:20.049606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.175 [2024-11-20 13:35:20.049610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.175 [2024-11-20 13:35:20.049626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.175 [2024-11-20 13:35:20.049643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.175 [2024-11-20 13:35:20.049662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.175 [2024-11-20 13:35:20.049708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.175 [2024-11-20 13:35:20.049719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.175 [2024-11-20 13:35:20.049724] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.175 [2024-11-20 13:35:20.049739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.175 [2024-11-20 13:35:20.049756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.175 [2024-11-20 13:35:20.049775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.175 [2024-11-20 13:35:20.049821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.175 [2024-11-20 13:35:20.049829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.175 [2024-11-20 13:35:20.049833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.175 [2024-11-20 13:35:20.049848] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.175 [2024-11-20 13:35:20.049857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.175 [2024-11-20 13:35:20.049865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.175 [2024-11-20 13:35:20.049883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.175 [2024-11-20 13:35:20.049927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.175 [2024-11-20 13:35:20.049934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.175 [2024-11-20 13:35:20.049938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.176 [2024-11-20 13:35:20.049942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.176 [2024-11-20 13:35:20.049953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.176 [2024-11-20 13:35:20.049958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.176 [2024-11-20 13:35:20.049962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.176 [2024-11-20 13:35:20.049970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.176 [2024-11-20 13:35:20.049989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.176 [2024-11-20 13:35:20.050031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.176 [2024-11-20 13:35:20.050039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.176 [2024-11-20 13:35:20.050042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.176 [2024-11-20 13:35:20.050047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.176 [2024-11-20 13:35:20.050057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.176 [2024-11-20 13:35:20.050062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.176 [2024-11-20 13:35:20.050066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.176 [2024-11-20 13:35:20.050074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.176 [2024-11-20 13:35:20.050093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.176 [2024-11-20 13:35:20.050138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.176 [2024-11-20 13:35:20.050146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.176 [2024-11-20 13:35:20.050150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.176 [2024-11-20 13:35:20.050154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.176 [2024-11-20 13:35:20.050165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:08.176 [2024-11-20 13:35:20.050170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:08.176 [2024-11-20 13:35:20.050174] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef0750) 00:16:08.176 [2024-11-20 13:35:20.050182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.176 [2024-11-20 13:35:20.054262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f54bc0, cid 3, qid 0 00:16:08.176 [2024-11-20 13:35:20.054355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:08.176 [2024-11-20 13:35:20.054364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:08.176 [2024-11-20 13:35:20.054368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:08.176 [2024-11-20 13:35:20.054372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f54bc0) on tqpair=0x1ef0750 00:16:08.176 [2024-11-20 13:35:20.054382] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:16:08.176 0% 00:16:08.176 Data Units Read: 0 00:16:08.176 Data Units Written: 0 00:16:08.176 Host Read Commands: 0 00:16:08.176 Host Write Commands: 0 00:16:08.176 Controller Busy Time: 0 minutes 00:16:08.176 Power Cycles: 0 00:16:08.176 Power On Hours: 0 hours 00:16:08.176 Unsafe Shutdowns: 0 00:16:08.176 Unrecoverable Media Errors: 0 00:16:08.176 Lifetime Error Log Entries: 0 00:16:08.176 Warning Temperature Time: 0 minutes 00:16:08.176 Critical Temperature Time: 0 minutes 00:16:08.176 00:16:08.176 Number of Queues 00:16:08.176 ================ 00:16:08.176 Number of I/O Submission Queues: 127 00:16:08.176 Number of I/O Completion Queues: 127 00:16:08.176 00:16:08.176 Active Namespaces 00:16:08.176 ================= 00:16:08.176 Namespace ID:1 00:16:08.176 Error Recovery Timeout: Unlimited 00:16:08.176 Command Set Identifier: NVM (00h) 00:16:08.176 Deallocate: Supported 00:16:08.176 Deallocated/Unwritten Error: Not Supported 00:16:08.176 Deallocated Read Value: Unknown 00:16:08.176 Deallocate in Write Zeroes: Not Supported 00:16:08.176 Deallocated Guard Field: 0xFFFF 00:16:08.176 Flush: Supported 00:16:08.176 Reservation: Supported 00:16:08.176 Namespace Sharing Capabilities: Multiple Controllers 00:16:08.176 Size (in LBAs): 131072 (0GiB) 00:16:08.176 Capacity (in LBAs): 131072 (0GiB) 00:16:08.176 Utilization (in LBAs): 131072 (0GiB) 00:16:08.176 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:08.176 EUI64: ABCDEF0123456789 00:16:08.176 UUID: 7034a5cb-e3f9-48db-ba93-cd4c23e47f95 00:16:08.176 Thin Provisioning: Not Supported 00:16:08.176 Per-NS Atomic Units: Yes 00:16:08.176 Atomic Boundary Size (Normal): 0 00:16:08.176 Atomic Boundary Size (PFail): 0 00:16:08.176 Atomic Boundary Offset: 0 00:16:08.176 Maximum Single Source Range Length: 65535 00:16:08.176 Maximum Copy Length: 65535 00:16:08.176 Maximum Source Range Count: 1 00:16:08.176 NGUID/EUI64 Never Reused: No 00:16:08.176 Namespace Write Protected: No 00:16:08.176 Number of LBA Formats: 1 00:16:08.176 Current LBA Format: LBA Format #00 00:16:08.176 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:08.176 00:16:08.176 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # 
set +x 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:08.433 rmmod nvme_tcp 00:16:08.433 rmmod nvme_fabrics 00:16:08.433 rmmod nvme_keyring 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74576 ']' 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74576 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74576 ']' 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74576 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74576 00:16:08.433 killing process with pid 74576 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:08.433 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:08.434 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74576' 00:16:08.434 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74576 00:16:08.434 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74576 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # 
nvmf_veth_fini 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:08.691 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:16:08.949 00:16:08.949 real 0m2.331s 00:16:08.949 user 0m4.808s 00:16:08.949 sys 0m0.732s 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:08.949 ************************************ 00:16:08.949 END TEST nvmf_identify 00:16:08.949 ************************************ 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.949 ************************************ 00:16:08.949 START TEST nvmf_perf 00:16:08.949 ************************************ 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:08.949 * Looking for test storage... 
00:16:08.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:08.949 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:09.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.208 --rc genhtml_branch_coverage=1 00:16:09.208 --rc genhtml_function_coverage=1 00:16:09.208 --rc genhtml_legend=1 00:16:09.208 --rc geninfo_all_blocks=1 00:16:09.208 --rc geninfo_unexecuted_blocks=1 00:16:09.208 00:16:09.208 ' 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:09.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.208 --rc genhtml_branch_coverage=1 00:16:09.208 --rc genhtml_function_coverage=1 00:16:09.208 --rc genhtml_legend=1 00:16:09.208 --rc geninfo_all_blocks=1 00:16:09.208 --rc geninfo_unexecuted_blocks=1 00:16:09.208 00:16:09.208 ' 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:09.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.208 --rc genhtml_branch_coverage=1 00:16:09.208 --rc genhtml_function_coverage=1 00:16:09.208 --rc genhtml_legend=1 00:16:09.208 --rc geninfo_all_blocks=1 00:16:09.208 --rc geninfo_unexecuted_blocks=1 00:16:09.208 00:16:09.208 ' 00:16:09.208 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:09.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.208 --rc genhtml_branch_coverage=1 00:16:09.208 --rc genhtml_function_coverage=1 00:16:09.208 --rc genhtml_legend=1 00:16:09.209 --rc geninfo_all_blocks=1 00:16:09.209 --rc geninfo_unexecuted_blocks=1 00:16:09.209 00:16:09.209 ' 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:09.209 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:09.209 13:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:09.209 Cannot find device "nvmf_init_br" 00:16:09.209 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:09.209 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:09.209 Cannot find device "nvmf_init_br2" 00:16:09.209 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:09.209 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:09.209 Cannot find device "nvmf_tgt_br" 00:16:09.209 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:16:09.209 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:09.209 Cannot find device "nvmf_tgt_br2" 00:16:09.209 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:16:09.209 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:09.209 Cannot find device "nvmf_init_br" 00:16:09.209 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:09.210 Cannot find device "nvmf_init_br2" 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:09.210 Cannot find device "nvmf_tgt_br" 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:09.210 Cannot find device "nvmf_tgt_br2" 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:09.210 Cannot find device "nvmf_br" 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:09.210 Cannot find device "nvmf_init_if" 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:09.210 Cannot find device "nvmf_init_if2" 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:09.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:09.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:09.210 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:09.469 13:35:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:09.469 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:09.470 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:09.470 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:16:09.470 00:16:09.470 --- 10.0.0.3 ping statistics --- 00:16:09.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.470 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:09.470 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:09.470 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:16:09.470 00:16:09.470 --- 10.0.0.4 ping statistics --- 00:16:09.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.470 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:09.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:09.470 00:16:09.470 --- 10.0.0.1 ping statistics --- 00:16:09.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.470 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:09.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:16:09.470 00:16:09.470 --- 10.0.0.2 ping statistics --- 00:16:09.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.470 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74824 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74824 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74824 ']' 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:09.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
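Note: the nvmf_veth_init sequence traced above builds the virtual test topology before the SPDK target is launched. The initiator-side interfaces (nvmf_init_if, nvmf_init_if2) stay in the root namespace, the target-side interfaces (nvmf_tgt_if, nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace, every veth peer is enslaved to the nvmf_br bridge, iptables rules admit TCP port 4420, and the four pings confirm initiator-to-target (10.0.0.3/10.0.0.4) and target-to-initiator (10.0.0.1/10.0.0.2) reachability. A condensed sketch of the same setup, reduced to a single initiator/target pair and using only commands that appear in the trace (interface names and addresses follow the log; this is not a verbatim excerpt of nvmf/common.sh):

    # condensed reproduction of the veth/netns topology traced above (single pair)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                  # bridge the two root-namespace peers
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                        # root namespace -> namespaced target address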
00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.470 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:09.729 [2024-11-20 13:35:21.475055] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:16:09.729 [2024-11-20 13:35:21.475168] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.729 [2024-11-20 13:35:21.626580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:09.987 [2024-11-20 13:35:21.698963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.987 [2024-11-20 13:35:21.699032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.987 [2024-11-20 13:35:21.699047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.987 [2024-11-20 13:35:21.699058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.987 [2024-11-20 13:35:21.699067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.987 [2024-11-20 13:35:21.700291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.987 [2024-11-20 13:35:21.700424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.987 [2024-11-20 13:35:21.700564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.987 [2024-11-20 13:35:21.700568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.987 [2024-11-20 13:35:21.759266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:09.987 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.987 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:16:09.987 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:09.987 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:09.987 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:09.987 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.987 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:09.987 13:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:10.553 13:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:10.553 13:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:10.812 13:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:10.812 13:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:11.071 13:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:11.071 13:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:16:11.071 13:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:11.071 13:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:11.071 13:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:11.330 [2024-11-20 13:35:23.169443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.330 13:35:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:11.588 13:35:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:11.588 13:35:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:11.847 13:35:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:11.847 13:35:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:12.106 13:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:12.365 [2024-11-20 13:35:24.239452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:12.365 13:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:12.623 13:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:12.623 13:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:12.623 13:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:12.623 13:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:14.002 Initializing NVMe Controllers 00:16:14.002 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:14.002 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:14.002 Initialization complete. Launching workers. 00:16:14.002 ======================================================== 00:16:14.002 Latency(us) 00:16:14.002 Device Information : IOPS MiB/s Average min max 00:16:14.002 PCIE (0000:00:10.0) NSID 1 from core 0: 23840.00 93.12 1341.92 286.92 8932.57 00:16:14.002 ======================================================== 00:16:14.002 Total : 23840.00 93.12 1341.92 286.92 8932.57 00:16:14.002 00:16:14.002 13:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:14.941 Initializing NVMe Controllers 00:16:14.941 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:14.941 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:14.941 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:14.941 Initialization complete. Launching workers. 
00:16:14.941 ======================================================== 00:16:14.941 Latency(us) 00:16:14.941 Device Information : IOPS MiB/s Average min max 00:16:14.941 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3635.95 14.20 274.69 107.88 4308.47 00:16:14.941 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.23 7955.85 12008.55 00:16:14.941 ======================================================== 00:16:14.941 Total : 3759.94 14.69 533.69 107.88 12008.55 00:16:14.941 00:16:15.199 13:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:16.573 Initializing NVMe Controllers 00:16:16.573 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:16.573 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:16.573 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:16.573 Initialization complete. Launching workers. 00:16:16.573 ======================================================== 00:16:16.573 Latency(us) 00:16:16.573 Device Information : IOPS MiB/s Average min max 00:16:16.573 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8593.98 33.57 3724.54 580.91 10829.21 00:16:16.573 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3935.99 15.37 8177.10 6776.18 16994.53 00:16:16.573 ======================================================== 00:16:16.573 Total : 12529.97 48.95 5123.20 580.91 16994.53 00:16:16.573 00:16:16.573 13:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:16.573 13:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:19.102 Initializing NVMe Controllers 00:16:19.102 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:19.102 Controller IO queue size 128, less than required. 00:16:19.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:19.102 Controller IO queue size 128, less than required. 00:16:19.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:19.102 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:19.102 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:19.102 Initialization complete. Launching workers. 
00:16:19.102 ======================================================== 00:16:19.102 Latency(us) 00:16:19.102 Device Information : IOPS MiB/s Average min max 00:16:19.102 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1628.64 407.16 79816.69 39935.93 125688.95 00:16:19.102 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 638.77 159.69 207810.31 72163.20 337706.90 00:16:19.102 ======================================================== 00:16:19.102 Total : 2267.41 566.85 115874.80 39935.93 337706.90 00:16:19.102 00:16:19.102 13:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:16:19.360 Initializing NVMe Controllers 00:16:19.360 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:19.360 Controller IO queue size 128, less than required. 00:16:19.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:19.360 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:19.360 Controller IO queue size 128, less than required. 00:16:19.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:19.360 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:16:19.360 WARNING: Some requested NVMe devices were skipped 00:16:19.360 No valid NVMe controllers or AIO or URING devices found 00:16:19.360 13:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:16:21.891 Initializing NVMe Controllers 00:16:21.891 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:21.891 Controller IO queue size 128, less than required. 00:16:21.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:21.891 Controller IO queue size 128, less than required. 00:16:21.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:21.891 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:21.891 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:21.891 Initialization complete. Launching workers. 
00:16:21.891 00:16:21.891 ==================== 00:16:21.891 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:21.891 TCP transport: 00:16:21.891 polls: 8560 00:16:21.891 idle_polls: 5126 00:16:21.891 sock_completions: 3434 00:16:21.891 nvme_completions: 6225 00:16:21.891 submitted_requests: 9320 00:16:21.891 queued_requests: 1 00:16:21.891 00:16:21.891 ==================== 00:16:21.891 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:21.891 TCP transport: 00:16:21.891 polls: 8839 00:16:21.891 idle_polls: 4682 00:16:21.891 sock_completions: 4157 00:16:21.891 nvme_completions: 6727 00:16:21.891 submitted_requests: 10188 00:16:21.891 queued_requests: 1 00:16:21.891 ======================================================== 00:16:21.891 Latency(us) 00:16:21.891 Device Information : IOPS MiB/s Average min max 00:16:21.891 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1554.28 388.57 84077.79 43502.22 137026.07 00:16:21.891 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1679.64 419.91 76145.91 36100.06 138439.70 00:16:21.891 ======================================================== 00:16:21.891 Total : 3233.93 808.48 79958.11 36100.06 138439.70 00:16:21.891 00:16:22.149 13:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:22.149 13:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.407 rmmod nvme_tcp 00:16:22.407 rmmod nvme_fabrics 00:16:22.407 rmmod nvme_keyring 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74824 ']' 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74824 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74824 ']' 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74824 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74824 00:16:22.407 killing process with pid 74824 00:16:22.407 13:35:34 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74824' 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74824 00:16:22.407 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74824 00:16:23.341 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:23.341 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:23.341 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:23.341 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:16:23.341 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:16:23.341 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:23.341 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:16:23.341 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:23.341 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:23.341 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:23.341 13:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:16:23.341 00:16:23.341 real 0m14.435s 00:16:23.341 user 0m51.828s 00:16:23.341 sys 0m4.052s 00:16:23.341 13:35:35 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:23.341 ************************************ 00:16:23.341 END TEST nvmf_perf 00:16:23.341 ************************************ 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.341 ************************************ 00:16:23.341 START TEST nvmf_fio_host 00:16:23.341 ************************************ 00:16:23.341 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:23.600 * Looking for test storage... 00:16:23.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:23.600 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:23.600 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:23.600 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:23.600 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:23.600 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.600 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.600 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.600 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.600 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.600 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.600 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:23.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.601 --rc genhtml_branch_coverage=1 00:16:23.601 --rc genhtml_function_coverage=1 00:16:23.601 --rc genhtml_legend=1 00:16:23.601 --rc geninfo_all_blocks=1 00:16:23.601 --rc geninfo_unexecuted_blocks=1 00:16:23.601 00:16:23.601 ' 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:23.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.601 --rc genhtml_branch_coverage=1 00:16:23.601 --rc genhtml_function_coverage=1 00:16:23.601 --rc genhtml_legend=1 00:16:23.601 --rc geninfo_all_blocks=1 00:16:23.601 --rc geninfo_unexecuted_blocks=1 00:16:23.601 00:16:23.601 ' 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:23.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.601 --rc genhtml_branch_coverage=1 00:16:23.601 --rc genhtml_function_coverage=1 00:16:23.601 --rc genhtml_legend=1 00:16:23.601 --rc geninfo_all_blocks=1 00:16:23.601 --rc geninfo_unexecuted_blocks=1 00:16:23.601 00:16:23.601 ' 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:23.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.601 --rc genhtml_branch_coverage=1 00:16:23.601 --rc genhtml_function_coverage=1 00:16:23.601 --rc genhtml_legend=1 00:16:23.601 --rc geninfo_all_blocks=1 00:16:23.601 --rc geninfo_unexecuted_blocks=1 00:16:23.601 00:16:23.601 ' 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.601 13:35:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.601 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.602 13:35:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:23.602 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:23.602 Cannot find device "nvmf_init_br" 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:23.602 Cannot find device "nvmf_init_br2" 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:23.602 Cannot find device "nvmf_tgt_br" 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:23.602 Cannot find device "nvmf_tgt_br2" 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:23.602 Cannot find device "nvmf_init_br" 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:16:23.602 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:23.860 Cannot find device "nvmf_init_br2" 00:16:23.860 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:16:23.860 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:23.860 Cannot find device "nvmf_tgt_br" 00:16:23.860 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:23.861 Cannot find device "nvmf_tgt_br2" 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:23.861 Cannot find device "nvmf_br" 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:23.861 Cannot find device "nvmf_init_if" 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:23.861 Cannot find device "nvmf_init_if2" 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.861 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:24.120 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:24.120 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:16:24.120 00:16:24.120 --- 10.0.0.3 ping statistics --- 00:16:24.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.120 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:24.120 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:24.120 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:16:24.120 00:16:24.120 --- 10.0.0.4 ping statistics --- 00:16:24.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.120 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:24.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:24.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:24.120 00:16:24.120 --- 10.0.0.1 ping statistics --- 00:16:24.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.120 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:24.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:16:24.120 00:16:24.120 --- 10.0.0.2 ping statistics --- 00:16:24.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.120 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75277 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75277 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 75277 ']' 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.120 13:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.120 [2024-11-20 13:35:35.960275] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:16:24.120 [2024-11-20 13:35:35.960393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.378 [2024-11-20 13:35:36.120723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:24.378 [2024-11-20 13:35:36.212996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.379 [2024-11-20 13:35:36.213454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.379 [2024-11-20 13:35:36.213753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.379 [2024-11-20 13:35:36.214080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.379 [2024-11-20 13:35:36.214426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:24.379 [2024-11-20 13:35:36.216490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.379 [2024-11-20 13:35:36.216636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.379 [2024-11-20 13:35:36.216727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:24.379 [2024-11-20 13:35:36.216735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.379 [2024-11-20 13:35:36.276237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:25.024 13:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.024 13:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:16:25.024 13:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:25.282 [2024-11-20 13:35:37.209837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.282 13:35:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:25.282 13:35:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:25.541 13:35:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.541 13:35:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:25.799 Malloc1 00:16:25.799 13:35:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:26.069 13:35:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:26.329 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:26.588 [2024-11-20 13:35:38.429516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:26.588 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:26.847 13:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:27.106 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:27.106 fio-3.35 00:16:27.106 Starting 1 thread 00:16:29.637 00:16:29.637 test: (groupid=0, jobs=1): err= 0: pid=75360: Wed Nov 20 13:35:41 2024 00:16:29.637 read: IOPS=8705, BW=34.0MiB/s (35.7MB/s)(68.2MiB/2006msec) 00:16:29.637 slat (usec): min=2, max=335, avg= 2.42, stdev= 2.87 00:16:29.637 clat (usec): min=1918, max=13650, avg=7645.11, stdev=546.20 00:16:29.637 lat (usec): min=1951, max=13652, avg=7647.53, stdev=545.89 00:16:29.637 clat percentiles (usec): 00:16:29.637 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7046], 20.00th=[ 7242], 00:16:29.637 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:16:29.637 | 70.00th=[ 7898], 80.00th=[ 8029], 90.00th=[ 8225], 95.00th=[ 8455], 00:16:29.637 | 99.00th=[ 8848], 99.50th=[ 9110], 99.90th=[12387], 99.95th=[12649], 00:16:29.637 | 99.99th=[13566] 00:16:29.637 bw ( KiB/s): min=33496, max=35632, per=99.89%, avg=34784.00, stdev=908.41, samples=4 00:16:29.637 iops : min= 8376, max= 8908, avg=8696.50, stdev=226.16, samples=4 00:16:29.637 write: IOPS=8698, BW=34.0MiB/s (35.6MB/s)(68.2MiB/2006msec); 0 zone resets 00:16:29.637 slat (usec): min=2, max=140, avg= 2.49, stdev= 1.36 00:16:29.637 clat (usec): min=1821, max=12563, avg=6963.13, stdev=484.74 00:16:29.637 lat (usec): min=1834, max=12565, avg=6965.62, stdev=484.55 00:16:29.637 clat percentiles 
(usec): 00:16:29.637 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:16:29.637 | 30.00th=[ 6783], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7046], 00:16:29.637 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7635], 00:16:29.637 | 99.00th=[ 8029], 99.50th=[ 8225], 99.90th=[10552], 99.95th=[11338], 00:16:29.637 | 99.99th=[12387] 00:16:29.637 bw ( KiB/s): min=34376, max=35136, per=99.97%, avg=34786.00, stdev=312.20, samples=4 00:16:29.637 iops : min= 8594, max= 8784, avg=8696.50, stdev=78.05, samples=4 00:16:29.637 lat (msec) : 2=0.02%, 4=0.14%, 10=99.64%, 20=0.20% 00:16:29.637 cpu : usr=72.17%, sys=20.95%, ctx=11, majf=0, minf=7 00:16:29.637 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:29.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:29.637 issued rwts: total=17463,17450,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.637 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:29.637 00:16:29.637 Run status group 0 (all jobs): 00:16:29.637 READ: bw=34.0MiB/s (35.7MB/s), 34.0MiB/s-34.0MiB/s (35.7MB/s-35.7MB/s), io=68.2MiB (71.5MB), run=2006-2006msec 00:16:29.637 WRITE: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=68.2MiB (71.5MB), run=2006-2006msec 00:16:29.637 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:29.637 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:29.637 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:29.637 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:29.637 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:29.637 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:29.637 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:29.638 13:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:29.638 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:29.638 fio-3.35 00:16:29.638 Starting 1 thread 00:16:32.220 00:16:32.220 test: (groupid=0, jobs=1): err= 0: pid=75410: Wed Nov 20 13:35:43 2024 00:16:32.220 read: IOPS=8251, BW=129MiB/s (135MB/s)(259MiB/2012msec) 00:16:32.220 slat (usec): min=3, max=121, avg= 3.69, stdev= 1.71 00:16:32.220 clat (usec): min=3098, max=17971, avg=8618.35, stdev=2539.45 00:16:32.220 lat (usec): min=3101, max=17975, avg=8622.04, stdev=2539.50 00:16:32.220 clat percentiles (usec): 00:16:32.220 | 1.00th=[ 4293], 5.00th=[ 5014], 10.00th=[ 5473], 20.00th=[ 6325], 00:16:32.220 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8291], 60.00th=[ 8979], 00:16:32.220 | 70.00th=[10028], 80.00th=[10683], 90.00th=[12125], 95.00th=[13173], 00:16:32.220 | 99.00th=[15401], 99.50th=[16188], 99.90th=[17433], 99.95th=[17695], 00:16:32.220 | 99.99th=[17957] 00:16:32.220 bw ( KiB/s): min=54336, max=76224, per=51.23%, avg=67632.00, stdev=9893.94, samples=4 00:16:32.220 iops : min= 3396, max= 4764, avg=4227.00, stdev=618.37, samples=4 00:16:32.220 write: IOPS=4829, BW=75.5MiB/s (79.1MB/s)(138MiB/1828msec); 0 zone resets 00:16:32.220 slat (usec): min=36, max=407, avg=37.96, stdev= 7.45 00:16:32.220 clat (usec): min=5261, max=20744, avg=12176.81, stdev=2200.11 00:16:32.220 lat (usec): min=5309, max=20787, avg=12214.77, stdev=2199.76 00:16:32.220 clat percentiles (usec): 00:16:32.220 | 1.00th=[ 8029], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10290], 00:16:32.220 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12518], 00:16:32.220 | 70.00th=[13173], 80.00th=[13960], 90.00th=[15008], 95.00th=[16319], 00:16:32.220 | 99.00th=[18220], 99.50th=[18744], 99.90th=[20055], 99.95th=[20317], 00:16:32.220 | 99.99th=[20841] 00:16:32.220 bw ( KiB/s): min=57088, max=79616, per=90.99%, avg=70312.00, stdev=10511.64, samples=4 00:16:32.220 iops : min= 3568, max= 4976, avg=4394.50, stdev=656.98, samples=4 00:16:32.220 lat (msec) : 4=0.20%, 10=50.41%, 20=49.31%, 50=0.07% 00:16:32.220 cpu : usr=82.60%, sys=13.63%, ctx=4, majf=0, minf=20 00:16:32.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:32.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.220 issued rwts: total=16602,8829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.220 00:16:32.220 Run status group 0 (all jobs): 00:16:32.220 READ: bw=129MiB/s (135MB/s), 
129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2012-2012msec 00:16:32.220 WRITE: bw=75.5MiB/s (79.1MB/s), 75.5MiB/s-75.5MiB/s (79.1MB/s-79.1MB/s), io=138MiB (145MB), run=1828-1828msec 00:16:32.220 13:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.220 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:32.221 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:32.221 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:32.221 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:32.221 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:32.221 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:16:32.221 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:32.221 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:16:32.221 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:32.221 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:32.221 rmmod nvme_tcp 00:16:32.221 rmmod nvme_fabrics 00:16:32.497 rmmod nvme_keyring 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 75277 ']' 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 75277 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 75277 ']' 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 75277 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75277 00:16:32.497 killing process with pid 75277 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75277' 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 75277 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 75277 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:32.497 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # iptables-save 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:16:32.756 00:16:32.756 real 0m9.446s 00:16:32.756 user 0m37.610s 00:16:32.756 sys 0m2.435s 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.756 13:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.756 ************************************ 00:16:32.756 END TEST nvmf_fio_host 00:16:32.756 ************************************ 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.016 ************************************ 00:16:33.016 START TEST nvmf_failover 00:16:33.016 
************************************ 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:33.016 * Looking for test storage... 00:16:33.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:33.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.016 --rc genhtml_branch_coverage=1 00:16:33.016 --rc genhtml_function_coverage=1 00:16:33.016 --rc genhtml_legend=1 00:16:33.016 --rc geninfo_all_blocks=1 00:16:33.016 --rc geninfo_unexecuted_blocks=1 00:16:33.016 00:16:33.016 ' 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:33.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.016 --rc genhtml_branch_coverage=1 00:16:33.016 --rc genhtml_function_coverage=1 00:16:33.016 --rc genhtml_legend=1 00:16:33.016 --rc geninfo_all_blocks=1 00:16:33.016 --rc geninfo_unexecuted_blocks=1 00:16:33.016 00:16:33.016 ' 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:33.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.016 --rc genhtml_branch_coverage=1 00:16:33.016 --rc genhtml_function_coverage=1 00:16:33.016 --rc genhtml_legend=1 00:16:33.016 --rc geninfo_all_blocks=1 00:16:33.016 --rc geninfo_unexecuted_blocks=1 00:16:33.016 00:16:33.016 ' 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:33.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.016 --rc genhtml_branch_coverage=1 00:16:33.016 --rc genhtml_function_coverage=1 00:16:33.016 --rc genhtml_legend=1 00:16:33.016 --rc geninfo_all_blocks=1 00:16:33.016 --rc geninfo_unexecuted_blocks=1 00:16:33.016 00:16:33.016 ' 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:33.016 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.276 
13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:33.276 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:33.276 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:33.277 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:33.277 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:33.277 Cannot find device "nvmf_init_br" 00:16:33.277 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:33.277 13:35:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:33.277 Cannot find device "nvmf_init_br2" 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:16:33.277 Cannot find device "nvmf_tgt_br" 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:33.277 Cannot find device "nvmf_tgt_br2" 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:33.277 Cannot find device "nvmf_init_br" 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:33.277 Cannot find device "nvmf_init_br2" 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:33.277 Cannot find device "nvmf_tgt_br" 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:33.277 Cannot find device "nvmf_tgt_br2" 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:33.277 Cannot find device "nvmf_br" 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:33.277 Cannot find device "nvmf_init_if" 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:33.277 Cannot find device "nvmf_init_if2" 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:33.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:33.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:33.277 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:33.277 
13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:33.537 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:33.537 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:16:33.537 00:16:33.537 --- 10.0.0.3 ping statistics --- 00:16:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.537 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:33.537 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:33.537 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:16:33.537 00:16:33.537 --- 10.0.0.4 ping statistics --- 00:16:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.537 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:33.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:33.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:33.537 00:16:33.537 --- 10.0.0.1 ping statistics --- 00:16:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.537 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:33.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:16:33.537 00:16:33.537 --- 10.0.0.2 ping statistics --- 00:16:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.537 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75680 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75680 00:16:33.537 13:35:45 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75680 ']' 00:16:33.537 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.538 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.538 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.538 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.538 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:33.538 [2024-11-20 13:35:45.470658] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:16:33.538 [2024-11-20 13:35:45.470755] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.797 [2024-11-20 13:35:45.622420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:33.798 [2024-11-20 13:35:45.693036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.798 [2024-11-20 13:35:45.693104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.798 [2024-11-20 13:35:45.693118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.798 [2024-11-20 13:35:45.693129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.798 [2024-11-20 13:35:45.693138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
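nvmf_tgt is started here with core mask 0xE, and the reactor lines that follow land on cores 1, 2 and 3 while core 0 stays free. A quick decode of that mask (a standalone sketch, not part of the test scripts):

# 0xE = binary 1110 -> bits 1, 2 and 3 are set, so reactors run on cores 1-3.
printf 'mask 0x%X -> cores:' 0xE
for bit in $(seq 0 7); do
  if (( (0xE >> bit) & 1 )); then printf ' %d' "$bit"; fi
done
printf '\n'    # prints: mask 0xE -> cores: 1 2 3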
00:16:33.798 [2024-11-20 13:35:45.694329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.798 [2024-11-20 13:35:45.694465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:33.798 [2024-11-20 13:35:45.694474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.798 [2024-11-20 13:35:45.751613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:34.056 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.056 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:34.056 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.056 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:34.056 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:34.056 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.056 13:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:34.315 [2024-11-20 13:35:46.140353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.315 13:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:34.574 Malloc0 00:16:34.574 13:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:34.833 13:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:35.092 13:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:35.350 [2024-11-20 13:35:47.263251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:35.350 13:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:35.608 [2024-11-20 13:35:47.523457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:35.608 13:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:35.868 [2024-11-20 13:35:47.787686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:35.868 13:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75730 00:16:35.868 13:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:35.868 13:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
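Condensed, the target-side setup that failover.sh just drove through rpc.py amounts to the sequence below, with the flags and values copied from the trace above; this is a readability sketch, not a substitute for the test script:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Transport, bdev, subsystem and namespace, exactly as traced above:
# a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, one subsystem exposing it.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Three listeners on the namespaced target address; the test then removes and re-adds
# them one at a time to force path failover in bdevperf.
for port in 4420 4421 4422; do
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done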
00:16:35.868 13:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75730 /var/tmp/bdevperf.sock 00:16:35.868 13:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75730 ']' 00:16:35.868 13:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:35.868 13:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:35.868 13:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:35.868 13:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.868 13:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:37.244 13:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.244 13:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:37.244 13:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:37.244 NVMe0n1 00:16:37.503 13:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:37.761 00:16:37.761 13:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75755 00:16:37.761 13:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:37.761 13:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:38.697 13:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:38.982 13:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:42.282 13:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:42.282 00:16:42.282 13:35:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:42.849 13:35:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:46.198 13:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:46.198 [2024-11-20 13:35:57.841567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:46.198 13:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:47.132 13:35:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:47.392 13:35:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75755 00:16:52.740 { 00:16:52.740 "results": [ 00:16:52.740 { 00:16:52.740 "job": "NVMe0n1", 00:16:52.740 "core_mask": "0x1", 00:16:52.740 "workload": "verify", 00:16:52.740 "status": "finished", 00:16:52.740 "verify_range": { 00:16:52.740 "start": 0, 00:16:52.740 "length": 16384 00:16:52.740 }, 00:16:52.740 "queue_depth": 128, 00:16:52.740 "io_size": 4096, 00:16:52.740 "runtime": 15.008306, 00:16:52.740 "iops": 8834.641297958611, 00:16:52.740 "mibps": 34.510317570150825, 00:16:52.740 "io_failed": 3317, 00:16:52.740 "io_timeout": 0, 00:16:52.740 "avg_latency_us": 14101.906056013004, 00:16:52.740 "min_latency_us": 651.6363636363636, 00:16:52.740 "max_latency_us": 15728.64 00:16:52.740 } 00:16:52.740 ], 00:16:52.740 "core_count": 1 00:16:52.740 } 00:16:53.004 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75730 00:16:53.004 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75730 ']' 00:16:53.004 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75730 00:16:53.004 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:53.004 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.004 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75730 00:16:53.004 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.004 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.004 killing process with pid 75730 00:16:53.004 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75730' 00:16:53.004 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75730 00:16:53.004 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75730 00:16:53.004 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:53.004 [2024-11-20 13:35:47.857498] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:16:53.004 [2024-11-20 13:35:47.857605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75730 ] 00:16:53.004 [2024-11-20 13:35:48.004025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.004 [2024-11-20 13:35:48.072260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.004 [2024-11-20 13:35:48.125999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:53.004 Running I/O for 15 seconds... 
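The results block above reports roughly 8834.6 IOPS at 34.51 MiB/s over the 15 s run with 3317 failed IOs; the MiB/s figure is just IOPS times the 4096-byte IO size, and the failed IOs are what the ABORTED - SQ DELETION completions filling the bdevperf log below reflect, emitted each time a listener was removed from under the active path. Two quick checks one could run against the numbers and the saved try.txt (hypothetical post-processing, not part of the test):

# Throughput consistency: IOPS * 4 KiB per IO, expressed in MiB/s.
echo '8834.641297958611 * 4096 / 1048576' | bc -l    # ~34.51, matching "mibps"
# Count the lines recording commands aborted while listeners were being pulled.
grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt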
00:16:53.004 6820.00 IOPS, 26.64 MiB/s [2024-11-20T13:36:04.961Z] [2024-11-20 13:35:50.831143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.004 [2024-11-20 13:35:50.831236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.005 [2024-11-20 13:35:50.831539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831880] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.831977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.831991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832198] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.005 [2024-11-20 13:35:50.832495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.005 [2024-11-20 13:35:50.832512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64600 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.006 [2024-11-20 13:35:50.832526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.006 [2024-11-20 13:35:50.832555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.006 [2024-11-20 13:35:50.832585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.006 [2024-11-20 13:35:50.832615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.006 [2024-11-20 13:35:50.832645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.006 [2024-11-20 13:35:50.832675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.006 [2024-11-20 13:35:50.832705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.832735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.832776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.832811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 
[2024-11-20 13:35:50.832841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.832871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.832901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.832930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.832972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.832988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833149] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.006 [2024-11-20 13:35:50.833229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.006 [2024-11-20 13:35:50.833259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.006 [2024-11-20 13:35:50.833507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.006 [2024-11-20 13:35:50.833717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.006 [2024-11-20 13:35:50.833731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.833747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.833760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.833780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.833795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.833811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.833825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.833841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.833855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.833870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.833884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.833900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.833913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.833929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.833943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.833965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.833980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.833996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.007 [2024-11-20 13:35:50.834441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834742] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-11-20 13:35:50.834946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.007 [2024-11-20 13:35:50.834961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:50.834975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:50.834991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:50.835005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:50.835021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:50.835034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:50.835050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.008 [2024-11-20 13:35:50.835063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.008 [2024-11-20 13:35:50.835079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.008 [2024-11-20 13:35:50.835093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.008 [2024-11-20 13:35:50.835108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.008 [2024-11-20 13:35:50.835122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.008 [2024-11-20 13:35:50.835144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.008 [2024-11-20 13:35:50.835158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.008 [2024-11-20 13:35:50.835179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.008 [2024-11-20 13:35:50.835208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.008 [2024-11-20 13:35:50.835226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.008 [2024-11-20 13:35:50.835240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.008 [2024-11-20 13:35:50.835255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b38d30 is same with the state(6) to be set 
00:16:53.008 [2024-11-20 13:35:50.835272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:16:53.008 [2024-11-20 13:35:50.835288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:16:53.008 [2024-11-20 13:35:50.835300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64272 len:8 PRP1 0x0 PRP2 0x0 
00:16:53.008 [2024-11-20 13:35:50.835314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.008 [2024-11-20 13:35:50.835378] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 
00:16:53.008 [2024-11-20 13:35:50.835436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:53.008 [2024-11-20 13:35:50.835458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.008 [2024-11-20 13:35:50.835474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:53.008 [2024-11-20 13:35:50.835495] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:50.835509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.008 [2024-11-20 13:35:50.835522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:50.835537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.008 [2024-11-20 13:35:50.835550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:50.835564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:16:53.008 [2024-11-20 13:35:50.839462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:53.008 [2024-11-20 13:35:50.839504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9e710 (9): Bad file descriptor 00:16:53.008 [2024-11-20 13:35:50.867202] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:16:53.008 7650.00 IOPS, 29.88 MiB/s [2024-11-20T13:36:04.965Z] 8044.00 IOPS, 31.42 MiB/s [2024-11-20T13:36:04.965Z] 8329.00 IOPS, 32.54 MiB/s [2024-11-20T13:36:04.965Z] [2024-11-20 13:35:54.507029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.008 [2024-11-20 13:35:54.507102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.008 [2024-11-20 13:35:54.507178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.008 [2024-11-20 13:35:54.507226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.008 [2024-11-20 13:35:54.507256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.008 [2024-11-20 13:35:54.507287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.008 [2024-11-20 
13:35:54.507316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.008 [2024-11-20 13:35:54.507346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.008 [2024-11-20 13:35:54.507377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-11-20 13:35:54.507809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.008 [2024-11-20 13:35:54.507825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.507839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.507855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.507868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.507884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.507898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.507914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.507928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.507950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.507965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.507981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.507995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.508025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.508055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.508084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.508113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.508142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.508173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.508217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.508248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.508277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.508306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.508347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.009 [2024-11-20 13:35:54.508384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 
13:35:54.508579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-11-20 13:35:54.508809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.009 [2024-11-20 13:35:54.508825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.508839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.508855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.508868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.508884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.508898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.508913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.508927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.508957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.508974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.508990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:83 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74672 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.010 [2024-11-20 13:35:54.509848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.010 [2024-11-20 13:35:54.509877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.509976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.509992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.510006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.510021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.510040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.510057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.010 [2024-11-20 13:35:54.510070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.010 [2024-11-20 13:35:54.510086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510161] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.510421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.510451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.510480] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.510509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.510543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.510573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.510603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.510633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.011 [2024-11-20 13:35:54.510883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.510912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.510941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.510971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.510987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.511001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.511016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.511034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.511051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.511065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.011 [2024-11-20 13:35:54.511080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.011 [2024-11-20 13:35:54.511094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.011 [2024-11-20 13:35:54.511115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3d370 is same with the state(6) to be set 
00:16:53.011 [2024-11-20 13:35:54.511133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:16:53.011 [2024-11-20 13:35:54.511145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:16:53.011 [2024-11-20 13:35:54.511156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74888 len:8 PRP1 0x0 PRP2 0x0 
00:16:53.011 [2024-11-20 13:35:54.511170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.011 [2024-11-20 13:35:54.511247] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 
00:16:53.011 [2024-11-20 13:35:54.511310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:53.011 [2024-11-20 13:35:54.511332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.011 [2024-11-20 13:35:54.511348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:53.011 [2024-11-20 13:35:54.511361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.011 [2024-11-20 13:35:54.511376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:53.011 [2024-11-20 13:35:54.511389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.011 [2024-11-20 13:35:54.511404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:53.011 [2024-11-20 13:35:54.511417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.012 [2024-11-20 13:35:54.511431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:16:53.012 [2024-11-20 13:35:54.515345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 
00:16:53.012 [2024-11-20 13:35:54.515388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9e710 (9): Bad file descriptor 
00:16:53.012 [2024-11-20 13:35:54.543056] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:16:53.012 8411.60 IOPS, 32.86 MiB/s [2024-11-20T13:36:04.969Z] 8521.67 IOPS, 33.29 MiB/s [2024-11-20T13:36:04.969Z] 8602.57 IOPS, 33.60 MiB/s [2024-11-20T13:36:04.969Z] 8663.25 IOPS, 33.84 MiB/s [2024-11-20T13:36:04.969Z] 8705.11 IOPS, 34.00 MiB/s [2024-11-20T13:36:04.969Z] [2024-11-20 13:35:59.152095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.012 [2024-11-20 13:35:59.152721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.012 [2024-11-20 13:35:59.152756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.012 [2024-11-20 13:35:59.152785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.012 [2024-11-20 13:35:59.152816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.012 [2024-11-20 13:35:59.152845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.012 [2024-11-20 13:35:59.152875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.012 [2024-11-20 13:35:59.152910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.012 [2024-11-20 13:35:59.152951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.152983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.152998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.153013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.153028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.153043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.153059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.153073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.153088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.153102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.153126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.153141] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.153157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.153171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.153197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.153214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.153230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.153244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.153261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.153276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.153291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.153306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.153321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.153335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.153351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.012 [2024-11-20 13:35:59.153365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.012 [2024-11-20 13:35:59.153381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.153395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.153424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.153454] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.153495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.153525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.153580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.153610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.153640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.153670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.153700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.153729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.153760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.153791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.153821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.153852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.153882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.153912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.153949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.153979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.153995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.154009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.154039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.154069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.154100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:53.013 [2024-11-20 13:35:59.154115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.154129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.154159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.154200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.013 [2024-11-20 13:35:59.154231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.154262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.154293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.154324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.154362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.154393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.154423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154439] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.154453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.013 [2024-11-20 13:35:59.154484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.013 [2024-11-20 13:35:59.154499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.154981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.154997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:10 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.155292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.155322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.155352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.155382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24864 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.155412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.155442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.155471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.014 [2024-11-20 13:35:59.155501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 
[2024-11-20 13:35:59.155725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.014 [2024-11-20 13:35:59.155760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.014 [2024-11-20 13:35:59.155776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.015 [2024-11-20 13:35:59.155790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.155807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.015 [2024-11-20 13:35:59.155821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.155837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.015 [2024-11-20 13:35:59.155851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.155867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.015 [2024-11-20 13:35:59.155881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.155896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.015 [2024-11-20 13:35:59.155910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.155926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.015 [2024-11-20 13:35:59.155946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.155964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.015 [2024-11-20 13:35:59.155978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.155993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4a4d0 is same with the state(6) to be set 00:16:53.015 [2024-11-20 13:35:59.156010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:53.015 [2024-11-20 13:35:59.156022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:53.015 [2024-11-20 13:35:59.156032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24952 len:8 PRP1 0x0 PRP2 0x0 
00:16:53.015 [2024-11-20 13:35:59.156046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.156061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:53.015 [2024-11-20 13:35:59.156071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:53.015 [2024-11-20 13:35:59.156082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:8 PRP1 0x0 PRP2 0x0 00:16:53.015 [2024-11-20 13:35:59.156099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.156113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:53.015 [2024-11-20 13:35:59.156123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:53.015 [2024-11-20 13:35:59.156139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25480 len:8 PRP1 0x0 PRP2 0x0 00:16:53.015 [2024-11-20 13:35:59.156154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.156168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:53.015 [2024-11-20 13:35:59.156178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:53.015 [2024-11-20 13:35:59.156201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25488 len:8 PRP1 0x0 PRP2 0x0 00:16:53.015 [2024-11-20 13:35:59.156216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.156230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:53.015 [2024-11-20 13:35:59.156241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:53.015 [2024-11-20 13:35:59.156252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25496 len:8 PRP1 0x0 PRP2 0x0 00:16:53.015 [2024-11-20 13:35:59.156265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.156278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:53.015 [2024-11-20 13:35:59.156288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:53.015 [2024-11-20 13:35:59.156299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:8 PRP1 0x0 PRP2 0x0 00:16:53.015 [2024-11-20 13:35:59.156313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.156337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:53.015 [2024-11-20 13:35:59.156347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:53.015 [2024-11-20 13:35:59.156365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25512 len:8 PRP1 0x0 PRP2 0x0 00:16:53.015 [2024-11-20 13:35:59.156379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.156393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:53.015 [2024-11-20 13:35:59.156403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:53.015 [2024-11-20 13:35:59.156413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25520 len:8 PRP1 0x0 PRP2 0x0 00:16:53.015 [2024-11-20 13:35:59.156426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.156440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:53.015 [2024-11-20 13:35:59.156450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:53.015 [2024-11-20 13:35:59.156460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25528 len:8 PRP1 0x0 PRP2 0x0 00:16:53.015 [2024-11-20 13:35:59.156473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.156537] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:16:53.015 [2024-11-20 13:35:59.156596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.015 [2024-11-20 13:35:59.156618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.156635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.015 [2024-11-20 13:35:59.156649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.156663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.015 [2024-11-20 13:35:59.156683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.156698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.015 [2024-11-20 13:35:59.156711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.015 [2024-11-20 13:35:59.156726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:16:53.015 [2024-11-20 13:35:59.156761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9e710 (9): Bad file descriptor 00:16:53.015 [2024-11-20 13:35:59.160632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:16:53.015 [2024-11-20 13:35:59.184580] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:16:53.015 8704.10 IOPS, 34.00 MiB/s [2024-11-20T13:36:04.972Z] 8736.82 IOPS, 34.13 MiB/s [2024-11-20T13:36:04.972Z] 8766.75 IOPS, 34.25 MiB/s [2024-11-20T13:36:04.972Z] 8791.46 IOPS, 34.34 MiB/s [2024-11-20T13:36:04.972Z] 8813.79 IOPS, 34.43 MiB/s [2024-11-20T13:36:04.972Z] 8833.13 IOPS, 34.50 MiB/s 00:16:53.015 Latency(us) 00:16:53.015 [2024-11-20T13:36:04.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.015 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:53.015 Verification LBA range: start 0x0 length 0x4000 00:16:53.015 NVMe0n1 : 15.01 8834.64 34.51 221.01 0.00 14101.91 651.64 15728.64 00:16:53.015 [2024-11-20T13:36:04.972Z] =================================================================================================================== 00:16:53.015 [2024-11-20T13:36:04.972Z] Total : 8834.64 34.51 221.01 0.00 14101.91 651.64 15728.64 00:16:53.015 Received shutdown signal, test time was about 15.000000 seconds 00:16:53.015 00:16:53.015 Latency(us) 00:16:53.015 [2024-11-20T13:36:04.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.015 [2024-11-20T13:36:04.972Z] =================================================================================================================== 00:16:53.015 [2024-11-20T13:36:04.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.015 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:53.015 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:16:53.015 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:53.015 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75929 00:16:53.015 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:53.015 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75929 /var/tmp/bdevperf.sock 00:16:53.015 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75929 ']' 00:16:53.015 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.015 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.015 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:53.015 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.015 13:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:54.390 13:36:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.390 13:36:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:54.390 13:36:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:54.390 [2024-11-20 13:36:06.233588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:54.390 13:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:54.648 [2024-11-20 13:36:06.511881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:54.648 13:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:55.213 NVMe0n1 00:16:55.213 13:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:55.472 00:16:55.472 13:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:55.730 00:16:55.730 13:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:55.730 13:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:55.988 13:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:56.246 13:36:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:59.550 13:36:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:59.550 13:36:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:59.809 13:36:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76012 00:16:59.809 13:36:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:59.809 13:36:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 76012 00:17:00.742 { 00:17:00.742 "results": [ 00:17:00.742 { 00:17:00.742 "job": "NVMe0n1", 00:17:00.742 "core_mask": "0x1", 00:17:00.742 "workload": "verify", 00:17:00.742 "status": "finished", 00:17:00.742 "verify_range": { 00:17:00.742 "start": 0, 00:17:00.742 "length": 16384 00:17:00.742 }, 00:17:00.742 "queue_depth": 128, 
00:17:00.742 "io_size": 4096, 00:17:00.742 "runtime": 1.009367, 00:17:00.742 "iops": 6868.661250070589, 00:17:00.742 "mibps": 26.830708008088237, 00:17:00.742 "io_failed": 0, 00:17:00.742 "io_timeout": 0, 00:17:00.742 "avg_latency_us": 18560.45122169335, 00:17:00.742 "min_latency_us": 2293.76, 00:17:00.742 "max_latency_us": 15132.858181818181 00:17:00.742 } 00:17:00.742 ], 00:17:00.742 "core_count": 1 00:17:00.742 } 00:17:00.742 13:36:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:00.742 [2024-11-20 13:36:04.991590] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:17:00.742 [2024-11-20 13:36:04.991706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75929 ] 00:17:00.742 [2024-11-20 13:36:05.140558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.742 [2024-11-20 13:36:05.202689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.742 [2024-11-20 13:36:05.256031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:00.742 [2024-11-20 13:36:08.171526] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:17:00.742 [2024-11-20 13:36:08.171668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.742 [2024-11-20 13:36:08.171695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.742 [2024-11-20 13:36:08.171715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.742 [2024-11-20 13:36:08.171729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.742 [2024-11-20 13:36:08.171744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.742 [2024-11-20 13:36:08.171758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.742 [2024-11-20 13:36:08.171772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.742 [2024-11-20 13:36:08.171786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.742 [2024-11-20 13:36:08.171801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:17:00.742 [2024-11-20 13:36:08.171853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:17:00.742 [2024-11-20 13:36:08.171886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2113710 (9): Bad file descriptor 00:17:00.742 [2024-11-20 13:36:08.178594] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:17:00.742 Running I/O for 1 seconds... 
00:17:00.742 6805.00 IOPS, 26.58 MiB/s 00:17:00.742 Latency(us) 00:17:00.742 [2024-11-20T13:36:12.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.742 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:00.742 Verification LBA range: start 0x0 length 0x4000 00:17:00.742 NVMe0n1 : 1.01 6868.66 26.83 0.00 0.00 18560.45 2293.76 15132.86 00:17:00.742 [2024-11-20T13:36:12.699Z] =================================================================================================================== 00:17:00.742 [2024-11-20T13:36:12.699Z] Total : 6868.66 26.83 0.00 0.00 18560.45 2293.76 15132.86 00:17:00.742 13:36:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:00.742 13:36:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:01.320 13:36:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:01.578 13:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:01.578 13:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:01.837 13:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:02.096 13:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:05.380 13:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:05.380 13:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:05.380 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75929 00:17:05.380 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75929 ']' 00:17:05.380 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75929 00:17:05.380 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:05.380 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.380 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75929 00:17:05.380 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.380 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.380 killing process with pid 75929 00:17:05.380 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75929' 00:17:05.380 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75929 00:17:05.380 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75929 00:17:05.637 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:05.637 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:05.895 rmmod nvme_tcp 00:17:05.895 rmmod nvme_fabrics 00:17:05.895 rmmod nvme_keyring 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75680 ']' 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75680 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75680 ']' 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75680 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75680 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:05.895 killing process with pid 75680 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75680' 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75680 00:17:05.895 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75680 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:06.154 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:17:06.412 00:17:06.412 real 0m33.508s 00:17:06.412 user 2m9.914s 00:17:06.412 sys 0m5.712s 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:06.412 ************************************ 00:17:06.412 END TEST nvmf_failover 00:17:06.412 ************************************ 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.412 ************************************ 00:17:06.412 START TEST nvmf_host_discovery 00:17:06.412 ************************************ 00:17:06.412 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:06.672 * Looking for test storage... 
00:17:06.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:06.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.672 --rc genhtml_branch_coverage=1 00:17:06.672 --rc genhtml_function_coverage=1 00:17:06.672 --rc genhtml_legend=1 00:17:06.672 --rc geninfo_all_blocks=1 00:17:06.672 --rc geninfo_unexecuted_blocks=1 00:17:06.672 00:17:06.672 ' 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:06.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.672 --rc genhtml_branch_coverage=1 00:17:06.672 --rc genhtml_function_coverage=1 00:17:06.672 --rc genhtml_legend=1 00:17:06.672 --rc geninfo_all_blocks=1 00:17:06.672 --rc geninfo_unexecuted_blocks=1 00:17:06.672 00:17:06.672 ' 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:06.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.672 --rc genhtml_branch_coverage=1 00:17:06.672 --rc genhtml_function_coverage=1 00:17:06.672 --rc genhtml_legend=1 00:17:06.672 --rc geninfo_all_blocks=1 00:17:06.672 --rc geninfo_unexecuted_blocks=1 00:17:06.672 00:17:06.672 ' 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:06.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.672 --rc genhtml_branch_coverage=1 00:17:06.672 --rc genhtml_function_coverage=1 00:17:06.672 --rc genhtml_legend=1 00:17:06.672 --rc geninfo_all_blocks=1 00:17:06.672 --rc geninfo_unexecuted_blocks=1 00:17:06.672 00:17:06.672 ' 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:06.672 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:06.673 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
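The nvmf_veth_init trace that follows builds the test network by hand: host-side initiator veths (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), target veths moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), and a bridge nvmf_br joining the peer ends, followed by iptables ACCEPT rules for port 4420 and connectivity pings. A pared-down, single-interface sketch of the same topology, assuming root privileges and iproute2 and using only command forms visible in the trace, is:

  # hypothetical stand-alone reproduction of the traced topology (one initiator, one target)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # host initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # bridge the two peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                            # host reaches the target namespace
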
00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:06.673 Cannot find device "nvmf_init_br" 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:06.673 Cannot find device "nvmf_init_br2" 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:06.673 Cannot find device "nvmf_tgt_br" 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:06.673 Cannot find device "nvmf_tgt_br2" 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:06.673 Cannot find device "nvmf_init_br" 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:06.673 Cannot find device "nvmf_init_br2" 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:06.673 Cannot find device "nvmf_tgt_br" 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:06.673 Cannot find device "nvmf_tgt_br2" 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:17:06.673 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:06.932 Cannot find device "nvmf_br" 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:06.932 Cannot find device "nvmf_init_if" 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:06.932 Cannot find device "nvmf_init_if2" 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:06.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:06.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:06.932 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:06.933 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:07.192 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:07.192 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:17:07.192 00:17:07.192 --- 10.0.0.3 ping statistics --- 00:17:07.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.192 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:07.192 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:07.192 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:17:07.192 00:17:07.192 --- 10.0.0.4 ping statistics --- 00:17:07.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.192 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:07.192 00:17:07.192 --- 10.0.0.1 ping statistics --- 00:17:07.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.192 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:07.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:07.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:17:07.192 00:17:07.192 --- 10.0.0.2 ping statistics --- 00:17:07.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.192 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76339 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76339 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76339 ']' 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.192 13:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.192 [2024-11-20 13:36:19.048133] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:17:07.192 [2024-11-20 13:36:19.048330] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.451 [2024-11-20 13:36:19.216544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.451 [2024-11-20 13:36:19.285687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.451 [2024-11-20 13:36:19.285744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.451 [2024-11-20 13:36:19.285757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.451 [2024-11-20 13:36:19.285768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.452 [2024-11-20 13:36:19.285779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.452 [2024-11-20 13:36:19.286287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.452 [2024-11-20 13:36:19.345556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.387 [2024-11-20 13:36:20.098533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.387 [2024-11-20 13:36:20.106700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.387 13:36:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.387 null0 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.387 null1 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.387 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.388 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76379 00:17:08.388 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:08.388 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76379 /tmp/host.sock 00:17:08.388 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76379 ']' 00:17:08.388 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:17:08.388 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.388 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:08.388 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:08.388 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.388 13:36:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.388 [2024-11-20 13:36:20.201317] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:17:08.388 [2024-11-20 13:36:20.201418] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76379 ] 00:17:08.646 [2024-11-20 13:36:20.352952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.646 [2024-11-20 13:36:20.431090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.646 [2024-11-20 13:36:20.495987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:09.583 13:36:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.583 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.842 13:36:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:09.842 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.843 [2024-11-20 13:36:21.659062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.843 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.101 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:10.101 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:10.101 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:10.101 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:10.101 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:10.101 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.101 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.101 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.101 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:10.101 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:10.102 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:10.102 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:10.102 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:10.102 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:10.102 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:10.102 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:10.102 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.102 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.102 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:10.102 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:10.102 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.102 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:17:10.102 13:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:17:10.360 [2024-11-20 13:36:22.311897] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:10.360 [2024-11-20 13:36:22.311974] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:10.360 [2024-11-20 13:36:22.311999] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:10.618 
[2024-11-20 13:36:22.317945] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:10.618 [2024-11-20 13:36:22.372374] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:17:10.618 [2024-11-20 13:36:22.373528] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1274e60:1 started. 00:17:10.618 [2024-11-20 13:36:22.375498] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:10.618 [2024-11-20 13:36:22.375527] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:10.618 [2024-11-20 13:36:22.380486] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1274e60 was disconnected and freed. delete nvme_qpair. 00:17:11.185 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.185 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:11.185 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:11.185 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:11.185 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.185 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.186 13:36:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:11.186 13:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.186 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:11.445 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:11.445 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.445 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.445 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:11.445 [2024-11-20 13:36:23.144451] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x124d4a0:1 started. 00:17:11.445 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:11.445 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:11.445 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:11.445 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:11.445 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:11.445 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.445 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.445 [2024-11-20 13:36:23.150917] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x124d4a0 was disconnected and freed. delete nvme_qpair. 
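The trace above keeps re-running the same helpers from host/discovery.sh through the generic retry loop in common/autotest_common.sh. Below is a minimal sketch of what those helpers appear to do, reconstructed only from the commands visible in the xtrace output (it is not copied from the SPDK tree, and the real definitions may differ in detail):

    # Reconstructed sketch based on the rpc_cmd/jq/sort/xargs pipelines traced above.
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # waitforcondition re-evaluates an arbitrary shell condition up to 10 times,
    # sleeping one second between failed attempts, and returns 0 on the first success
    # (mirrors the @918-@924 lines of common/autotest_common.sh seen in this trace).
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

Each check such as [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]] is passed to waitforcondition as a string, which is why every retry shows up in the log as an eval line followed by the helper's rpc_cmd, jq, sort and xargs steps.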
00:17:11.445 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.445 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.446 [2024-11-20 13:36:23.252948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:11.446 [2024-11-20 13:36:23.253818] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:11.446 [2024-11-20 13:36:23.253980] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.446 [2024-11-20 13:36:23.259809] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:11.446 [2024-11-20 13:36:23.325228] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:17:11.446 [2024-11-20 13:36:23.325448] 
bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:11.446 [2024-11-20 13:36:23.325686] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:11.446 [2024-11-20 13:36:23.325808] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:11.446 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.706 [2024-11-20 13:36:23.482266] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:11.706 [2024-11-20 13:36:23.482306] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:11.706 [2024-11-20 13:36:23.482658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.706 [2024-11-20 13:36:23.482694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.706 [2024-11-20 13:36:23.482709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.706 [2024-11-20 13:36:23.482719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.706 [2024-11-20 13:36:23.482729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.706 [2024-11-20 13:36:23.482738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.706 [2024-11-20 13:36:23.482748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.706 [2024-11-20 13:36:23.482762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.706 [2024-11-20 13:36:23.482772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1251230 is same with the state(6) to be set 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:11.706 [2024-11-20 13:36:23.488242] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:17:11.706 [2024-11-20 13:36:23.488268] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:11.706 [2024-11-20 13:36:23.488340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1251230 (9): Bad file descriptor 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 
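The path and notification checks exercised just above and below this point follow the same pattern. A sketch reconstructed from the traced commands (the exact variable handling in host/discovery.sh is an assumption):

    # get_subsystem_paths prints the listener ports (trsvcid) the named controller
    # currently has paths to, e.g. "4420 4421" before the 4420 listener is removed
    # and "4421" afterwards.
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # get_notification_count asks the target how many notifications were raised since
    # the last consumed id; notify_id accumulates so each check only counts new events
    # (the values observed in this log are notification_count 1, 1, 0, 0, 2 and
    # notify_id 1, 2, 2, 2, 4).
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }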
00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.706 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:11.707 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.966 
13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.966 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.967 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:11.967 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:11.967 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:11.967 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:11.967 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:11.967 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.967 13:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.973 [2024-11-20 13:36:24.902695] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:12.973 [2024-11-20 13:36:24.902733] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:12.973 [2024-11-20 13:36:24.902752] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:12.973 [2024-11-20 13:36:24.908730] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:17:13.233 [2024-11-20 13:36:24.967099] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:17:13.233 [2024-11-20 13:36:24.968002] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1281770:1 started. 00:17:13.233 [2024-11-20 13:36:24.969821] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:13.233 [2024-11-20 13:36:24.970039] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.233 [2024-11-20 13:36:24.971607] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1281770 was disconnected and freed. delete nvme_qpair. 
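The request/response pairs that follow are the negative tests for bdev_nvme_start_discovery: the first discovery service on port 8009 attaches successfully, so further starts against the same endpoint must be rejected. Condensed from the traced commands (argument order as shown in the log; NOT is the autotest_common.sh wrapper that inverts the exit status, so an expected failure counts as a pass):

    # Started earlier at host/discovery.sh@141 and attached successfully:
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # A second discovery service pointed at the same 10.0.0.3:8009 endpoint is
    # rejected; the target answers with JSON-RPC error -17 ("File exists").
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # The later attempt against port 8010, where nothing is listening, uses a
    # 3000 ms attach timeout instead of -w and is expected to fail with -110
    # ("Connection timed out") after roughly three seconds of connect retries.
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000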
00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.233 request: 00:17:13.233 { 00:17:13.233 "name": "nvme", 00:17:13.233 "trtype": "tcp", 00:17:13.233 "traddr": "10.0.0.3", 00:17:13.233 "adrfam": "ipv4", 00:17:13.233 "trsvcid": "8009", 00:17:13.233 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:13.233 "wait_for_attach": true, 00:17:13.233 "method": "bdev_nvme_start_discovery", 00:17:13.233 "req_id": 1 00:17:13.233 } 00:17:13.233 Got JSON-RPC error response 00:17:13.233 response: 00:17:13.233 { 00:17:13.233 "code": -17, 00:17:13.233 "message": "File exists" 00:17:13.233 } 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.233 13:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:13.233 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.233 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:13.233 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:13.233 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:13.233 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.233 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.234 request: 00:17:13.234 { 00:17:13.234 "name": "nvme_second", 00:17:13.234 "trtype": "tcp", 00:17:13.234 "traddr": "10.0.0.3", 00:17:13.234 "adrfam": "ipv4", 00:17:13.234 "trsvcid": "8009", 00:17:13.234 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:13.234 "wait_for_attach": true, 00:17:13.234 "method": "bdev_nvme_start_discovery", 00:17:13.234 "req_id": 1 00:17:13.234 } 00:17:13.234 Got JSON-RPC error response 00:17:13.234 response: 00:17:13.234 { 00:17:13.234 "code": -17, 00:17:13.234 "message": "File exists" 00:17:13.234 } 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:13.234 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:13.493 13:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.428 [2024-11-20 13:36:26.254535] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:14.428 [2024-11-20 13:36:26.254781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124cbd0 with addr=10.0.0.3, port=8010 00:17:14.428 [2024-11-20 13:36:26.254941] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:14.428 [2024-11-20 13:36:26.255183] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:14.428 [2024-11-20 13:36:26.255247] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:15.364 [2024-11-20 13:36:27.254529] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:15.364 [2024-11-20 13:36:27.254600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124cbd0 with addr=10.0.0.3, port=8010 00:17:15.364 [2024-11-20 13:36:27.254627] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:15.364 [2024-11-20 13:36:27.254638] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:15.364 [2024-11-20 13:36:27.254649] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:16.298 [2024-11-20 13:36:28.254371] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:17:16.558 request: 00:17:16.558 { 00:17:16.558 "name": "nvme_second", 00:17:16.558 "trtype": "tcp", 00:17:16.558 "traddr": "10.0.0.3", 00:17:16.558 "adrfam": "ipv4", 00:17:16.558 "trsvcid": "8010", 00:17:16.558 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:16.558 "wait_for_attach": false, 00:17:16.558 "attach_timeout_ms": 3000, 00:17:16.558 "method": "bdev_nvme_start_discovery", 00:17:16.558 "req_id": 1 00:17:16.558 } 00:17:16.558 Got JSON-RPC error response 00:17:16.558 response: 00:17:16.558 { 00:17:16.558 "code": -110, 00:17:16.558 "message": "Connection timed out" 00:17:16.558 } 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:16.558 13:36:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76379 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:16.558 rmmod nvme_tcp 00:17:16.558 rmmod nvme_fabrics 00:17:16.558 rmmod nvme_keyring 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76339 ']' 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76339 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 76339 ']' 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 76339 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76339 00:17:16.558 killing process with pid 76339 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76339' 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 76339 00:17:16.558 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 76339 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:16.818 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:17:17.077 00:17:17.077 real 0m10.581s 00:17:17.077 user 0m19.933s 00:17:17.077 sys 0m2.118s 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.077 ************************************ 00:17:17.077 END TEST nvmf_host_discovery 00:17:17.077 ************************************ 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.077 ************************************ 
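For reference, a condensed sketch of the negative discovery case the nvmf_host_discovery test exercised above; the RPC socket, address, and NQN are taken from the log, and the expected outcome is the -110 (Connection timed out) JSON-RPC error shown there, not a guaranteed result on other setups.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path used by rpc_cmd in the log
# Port 8010 has no listener, so the discovery attach should fail within the 3000 ms timeout.
$RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
# Afterwards, only the original discovery controller and bdevs should remain:
$RPC -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'
$RPC -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'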
00:17:17.077 START TEST nvmf_host_multipath_status 00:17:17.077 ************************************ 00:17:17.077 13:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:17.077 * Looking for test storage... 00:17:17.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:17.077 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:17.077 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:17.077 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.337 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:17.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.338 --rc genhtml_branch_coverage=1 00:17:17.338 --rc genhtml_function_coverage=1 00:17:17.338 --rc genhtml_legend=1 00:17:17.338 --rc geninfo_all_blocks=1 00:17:17.338 --rc geninfo_unexecuted_blocks=1 00:17:17.338 00:17:17.338 ' 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:17.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.338 --rc genhtml_branch_coverage=1 00:17:17.338 --rc genhtml_function_coverage=1 00:17:17.338 --rc genhtml_legend=1 00:17:17.338 --rc geninfo_all_blocks=1 00:17:17.338 --rc geninfo_unexecuted_blocks=1 00:17:17.338 00:17:17.338 ' 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:17.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.338 --rc genhtml_branch_coverage=1 00:17:17.338 --rc genhtml_function_coverage=1 00:17:17.338 --rc genhtml_legend=1 00:17:17.338 --rc geninfo_all_blocks=1 00:17:17.338 --rc geninfo_unexecuted_blocks=1 00:17:17.338 00:17:17.338 ' 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:17.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.338 --rc genhtml_branch_coverage=1 00:17:17.338 --rc genhtml_function_coverage=1 00:17:17.338 --rc genhtml_legend=1 00:17:17.338 --rc geninfo_all_blocks=1 00:17:17.338 --rc geninfo_unexecuted_blocks=1 00:17:17.338 00:17:17.338 ' 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:17.338 13:36:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:17.338 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:17.338 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:17.339 Cannot find device "nvmf_init_br" 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:17.339 Cannot find device "nvmf_init_br2" 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:17.339 Cannot find device "nvmf_tgt_br" 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:17.339 Cannot find device "nvmf_tgt_br2" 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:17.339 Cannot find device "nvmf_init_br" 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:17.339 Cannot find device "nvmf_init_br2" 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:17.339 Cannot find device "nvmf_tgt_br" 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:17.339 Cannot find device "nvmf_tgt_br2" 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:17.339 Cannot find device "nvmf_br" 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:17:17.339 Cannot find device "nvmf_init_if" 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:17:17.339 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:17.599 Cannot find device "nvmf_init_if2" 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:17.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:17.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:17.599 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:17.600 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:17.600 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:17.600 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:17.600 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:17.859 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:17.859 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:17:17.859 00:17:17.859 --- 10.0.0.3 ping statistics --- 00:17:17.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.859 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:17.859 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:17.859 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:17:17.859 00:17:17.859 --- 10.0.0.4 ping statistics --- 00:17:17.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.859 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:17.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:17.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:17.859 00:17:17.859 --- 10.0.0.1 ping statistics --- 00:17:17.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.859 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:17.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:17.859 00:17:17.859 --- 10.0.0.2 ping statistics --- 00:17:17.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.859 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76881 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76881 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76881 ']' 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
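The "Cannot find device" messages and the ping checks above come from nvmf_veth_fini tearing down and nvmf_veth_init rebuilding the test topology. A condensed sketch of what nvmf_veth_init sets up, using the interface names and 10.0.0.x addresses shown in the log (the second init/tgt interface pair is handled the same way and is omitted here):

# Target-side interfaces live in a dedicated network namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# Bridge the initiator and target halves together in the root namespace.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br master nvmf_br
# Connectivity check, as in the log above.
ping -c 1 10.0.0.3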
00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.859 13:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:17.859 [2024-11-20 13:36:29.675917] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:17:17.859 [2024-11-20 13:36:29.676004] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.118 [2024-11-20 13:36:29.826985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:18.118 [2024-11-20 13:36:29.898122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.118 [2024-11-20 13:36:29.898213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.118 [2024-11-20 13:36:29.898229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.118 [2024-11-20 13:36:29.898240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.118 [2024-11-20 13:36:29.898249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.118 [2024-11-20 13:36:29.899486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.118 [2024-11-20 13:36:29.899499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.118 [2024-11-20 13:36:29.957338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:18.118 13:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.118 13:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:18.118 13:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:18.118 13:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.118 13:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:18.118 13:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.118 13:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76881 00:17:18.118 13:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:18.685 [2024-11-20 13:36:30.348166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.685 13:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:18.685 Malloc0 00:17:18.944 13:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:19.203 13:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:19.462 13:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:19.721 [2024-11-20 13:36:31.532304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:19.721 13:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:19.980 [2024-11-20 13:36:31.852457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:19.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.981 13:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:19.981 13:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76929 00:17:19.981 13:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.981 13:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76929 /var/tmp/bdevperf.sock 00:17:19.981 13:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76929 ']' 00:17:19.981 13:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.981 13:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.981 13:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
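Condensed, the target-side configuration that multipath_status.sh drives in the log above amounts to the following RPC sequence; names, sizes, and addresses are taken verbatim from the log, and $RPC is shorthand for the rpc_py path used there.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Two listeners on the same target address so the host can open two paths.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421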
00:17:19.981 13:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.981 13:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:21.357 13:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.357 13:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:21.357 13:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:21.357 13:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:21.615 Nvme0n1 00:17:21.615 13:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:22.182 Nvme0n1 00:17:22.182 13:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:22.182 13:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:24.084 13:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:24.084 13:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:24.343 13:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:24.601 13:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:25.975 13:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:25.975 13:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:25.975 13:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.975 13:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:25.975 13:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:25.975 13:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:25.975 13:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.975 13:36:37 
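On the host side, the bdevperf instance above is configured over /var/tmp/bdevperf.sock. A condensed sketch of the multipath attach and of one ANA/status round trip, with every argument taken from the log (what the -r/-l/-o values tune is left to the script and not restated here):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
# Attach the same subsystem through both listeners under one controller name to get two paths.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
# Change ANA state on one listener at the target, then read per-path status back from the initiator.
$RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | \
    jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'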
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:26.234 13:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:26.234 13:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:26.234 13:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:26.234 13:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:26.493 13:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:26.493 13:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:26.751 13:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:26.751 13:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:27.009 13:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:27.009 13:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:27.009 13:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.009 13:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:27.268 13:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:27.268 13:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:27.268 13:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.268 13:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:27.536 13:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:27.536 13:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:17:27.536 13:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:28.110 13:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:28.110 13:36:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:29.487 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:29.487 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:29.487 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:29.487 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:29.487 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:29.487 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:29.487 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:29.487 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:29.744 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:29.744 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:29.744 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:29.744 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:30.001 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:30.001 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:30.001 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:30.001 13:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:30.258 13:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:30.258 13:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:30.258 13:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:30.259 13:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:30.823 13:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:30.823 13:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:30.823 13:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:30.823 13:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:31.080 13:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:31.080 13:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:31.080 13:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:31.337 13:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:31.594 13:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:32.529 13:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:32.529 13:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:32.529 13:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:32.529 13:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:33.096 13:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.096 13:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:33.096 13:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.096 13:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:33.354 13:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:33.354 13:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:33.354 13:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.354 13:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:33.612 13:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.612 13:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:17:33.612 13:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.612 13:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:33.935 13:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.935 13:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:33.935 13:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:33.935 13:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.194 13:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.194 13:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:34.194 13:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.194 13:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:34.452 13:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.452 13:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:34.452 13:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:34.711 13:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:34.969 13:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:35.905 13:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:35.905 13:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:35.905 13:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.163 13:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:36.421 13:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.421 13:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:36.421 13:36:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.421 13:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:36.679 13:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:36.679 13:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:36.679 13:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.679 13:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:36.937 13:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.937 13:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:36.937 13:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.937 13:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:37.196 13:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.196 13:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:37.196 13:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.196 13:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:37.454 13:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.454 13:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:37.454 13:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.454 13:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:38.020 13:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:38.020 13:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:38.020 13:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:38.020 13:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:38.586 13:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:39.630 13:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:39.630 13:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:39.630 13:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.630 13:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:39.630 13:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:39.630 13:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:39.630 13:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.630 13:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:40.197 13:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:40.197 13:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:40.197 13:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.197 13:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:40.456 13:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.456 13:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:40.456 13:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.456 13:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:40.714 13:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.714 13:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:40.714 13:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.714 13:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:17:40.973 13:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:40.973 13:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:40.973 13:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:40.973 13:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:41.230 13:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:41.230 13:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:41.230 13:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:41.489 13:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:42.057 13:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:42.992 13:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:42.992 13:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:42.992 13:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.992 13:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:43.258 13:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:43.258 13:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:43.258 13:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.258 13:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:43.516 13:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.516 13:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:43.516 13:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.516 13:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:17:43.775 13:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.775 13:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:43.775 13:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.775 13:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:44.339 13:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.339 13:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:44.339 13:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.339 13:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:44.597 13:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:44.597 13:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:44.597 13:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.597 13:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:44.856 13:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.856 13:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:45.115 13:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:45.115 13:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:45.373 13:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:45.632 13:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:46.566 13:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:46.566 13:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:46.566 13:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
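Note: at this point the host-side policy is switched to active_active with bdev_nvme_set_multipath_policy, so after the next ANA update both listeners can report current==true at once. The set_ANA_state steps above are just two listener updates on the target side, one per port; a hedged sketch of that two-call helper, using only the RPC arguments shown in this trace:

    set_ANA_state() { # usage: set_ANA_state <state_for_4420> <state_for_4421>
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }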
00:17:46.566 13:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:47.132 13:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.132 13:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:47.132 13:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.132 13:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:47.133 13:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.133 13:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:47.133 13:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.133 13:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:47.699 13:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.699 13:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:47.699 13:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.699 13:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:47.958 13:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.958 13:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:47.958 13:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:47.959 13:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:48.218 13:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:48.218 13:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:48.218 13:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:48.218 13:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:48.476 13:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:48.476 
13:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:48.476 13:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:48.734 13:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:48.992 13:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:50.367 13:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:50.367 13:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:50.367 13:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.367 13:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:50.367 13:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:50.367 13:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:50.367 13:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.367 13:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:50.938 13:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.938 13:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:50.938 13:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:50.938 13:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.198 13:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.198 13:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:51.198 13:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.198 13:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:51.455 13:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.455 13:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:51.455 13:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.455 13:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:51.714 13:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.714 13:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:51.714 13:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.714 13:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:51.973 13:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.973 13:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:51.973 13:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:52.231 13:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:52.489 13:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:53.424 13:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:53.424 13:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:53.424 13:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.424 13:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:53.990 13:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:53.990 13:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:53.990 13:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.990 13:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:54.249 13:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.249 13:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:17:54.249 13:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.249 13:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:54.508 13:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.508 13:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:54.508 13:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.508 13:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:54.767 13:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.767 13:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:54.767 13:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.767 13:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:55.026 13:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.026 13:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:55.026 13:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.026 13:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:55.284 13:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.284 13:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:55.284 13:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:55.541 13:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:56.105 13:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:57.040 13:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:57.040 13:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:57.040 13:37:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.040 13:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:57.298 13:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.298 13:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:57.298 13:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.298 13:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:57.555 13:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:57.555 13:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:57.555 13:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.555 13:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:57.813 13:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.813 13:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:57.813 13:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.813 13:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:58.379 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.379 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:58.379 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.379 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:58.637 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.637 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:58.637 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.637 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:17:58.896 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:58.896 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76929 00:17:58.896 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76929 ']' 00:17:58.896 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76929 00:17:58.896 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:17:58.896 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.896 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76929 00:17:58.896 killing process with pid 76929 00:17:58.896 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:58.896 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:58.896 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76929' 00:17:58.896 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76929 00:17:58.896 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76929 00:17:58.896 { 00:17:58.896 "results": [ 00:17:58.896 { 00:17:58.896 "job": "Nvme0n1", 00:17:58.896 "core_mask": "0x4", 00:17:58.896 "workload": "verify", 00:17:58.896 "status": "terminated", 00:17:58.896 "verify_range": { 00:17:58.896 "start": 0, 00:17:58.896 "length": 16384 00:17:58.896 }, 00:17:58.896 "queue_depth": 128, 00:17:58.896 "io_size": 4096, 00:17:58.896 "runtime": 36.701864, 00:17:58.896 "iops": 8146.261999118083, 00:17:58.896 "mibps": 31.82133593405501, 00:17:58.896 "io_failed": 0, 00:17:58.896 "io_timeout": 0, 00:17:58.896 "avg_latency_us": 15680.038504931721, 00:17:58.896 "min_latency_us": 454.2836363636364, 00:17:58.896 "max_latency_us": 4026531.84 00:17:58.896 } 00:17:58.896 ], 00:17:58.896 "core_count": 1 00:17:58.896 } 00:17:59.159 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76929 00:17:59.159 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:59.159 [2024-11-20 13:36:31.919101] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:17:59.159 [2024-11-20 13:36:31.919243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76929 ] 00:17:59.159 [2024-11-20 13:36:32.064657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.159 [2024-11-20 13:36:32.137087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.159 [2024-11-20 13:36:32.195806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:59.159 Running I/O for 90 seconds... 
00:17:59.159 6677.00 IOPS, 26.08 MiB/s [2024-11-20T13:37:11.116Z] 6730.00 IOPS, 26.29 MiB/s [2024-11-20T13:37:11.116Z] 6748.00 IOPS, 26.36 MiB/s [2024-11-20T13:37:11.116Z] 6729.00 IOPS, 26.29 MiB/s [2024-11-20T13:37:11.116Z] 6737.00 IOPS, 26.32 MiB/s [2024-11-20T13:37:11.116Z] 6767.83 IOPS, 26.44 MiB/s [2024-11-20T13:37:11.116Z] 7071.86 IOPS, 27.62 MiB/s [2024-11-20T13:37:11.116Z] 7301.88 IOPS, 28.52 MiB/s [2024-11-20T13:37:11.116Z] 7413.22 IOPS, 28.96 MiB/s [2024-11-20T13:37:11.116Z] 7564.80 IOPS, 29.55 MiB/s [2024-11-20T13:37:11.116Z] 7698.73 IOPS, 30.07 MiB/s [2024-11-20T13:37:11.116Z] 7811.83 IOPS, 30.51 MiB/s [2024-11-20T13:37:11.116Z] 7898.92 IOPS, 30.86 MiB/s [2024-11-20T13:37:11.116Z] 7979.29 IOPS, 31.17 MiB/s [2024-11-20T13:37:11.116Z] 8052.20 IOPS, 31.45 MiB/s [2024-11-20T13:37:11.116Z] [2024-11-20 13:36:49.918551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.159 [2024-11-20 13:36:49.918627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.918690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.159 [2024-11-20 13:36:49.918713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.918738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.159 [2024-11-20 13:36:49.918754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.918775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.159 [2024-11-20 13:36:49.918791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.918813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.159 [2024-11-20 13:36:49.918829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.918851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.159 [2024-11-20 13:36:49.918867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.918888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.159 [2024-11-20 13:36:49.918904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.918926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.159 [2024-11-20 13:36:49.918942] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.918963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.159 [2024-11-20 13:36:49.919011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.919037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.159 [2024-11-20 13:36:49.919053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.919074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.159 [2024-11-20 13:36:49.919090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.919112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.159 [2024-11-20 13:36:49.919127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.919149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.159 [2024-11-20 13:36:49.919164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.919248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.159 [2024-11-20 13:36:49.919270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:59.159 [2024-11-20 13:36:49.919295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.919311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.919350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.919388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111968 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.919425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.919461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.919499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.919539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.919590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.919628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.919666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.919709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.919747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.919784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919806] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.919822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.919859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.919896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.919933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.919971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.919993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.920009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.920057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.920100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.920139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.920176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 
13:36:49.920212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.920230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.920268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.920305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.920342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.160 [2024-11-20 13:36:49.920379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.920416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.920454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.920493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.920539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.920578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.920617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.920660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.920697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.920735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.920773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:59.160 [2024-11-20 13:36:49.920795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.160 [2024-11-20 13:36:49.920810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.920832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.161 [2024-11-20 13:36:49.920848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.920870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.161 [2024-11-20 13:36:49.920886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.920907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.161 [2024-11-20 13:36:49.920923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.920958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.920974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.920997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921749] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.921889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.161 [2024-11-20 13:36:49.921937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.921958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.161 [2024-11-20 13:36:49.921974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.922004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.161 [2024-11-20 13:36:49.922021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.922042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.161 [2024-11-20 13:36:49.922058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.922079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.161 [2024-11-20 13:36:49.922096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.922118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.161 [2024-11-20 13:36:49.922134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:59.161 
[2024-11-20 13:36:49.922155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.161 [2024-11-20 13:36:49.922171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.922205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.161 [2024-11-20 13:36:49.922223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.922245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.922260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.922282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.922298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.922320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.922336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:59.161 [2024-11-20 13:36:49.922358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.161 [2024-11-20 13:36:49.922373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.922410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.922448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.922494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.922539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.922576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.922613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.922650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.922687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.922725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.922762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.922799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.922836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.162 [2024-11-20 13:36:49.922878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.162 [2024-11-20 13:36:49.922916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.162 [2024-11-20 13:36:49.922960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.922983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.162 [2024-11-20 13:36:49.922998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.162 [2024-11-20 13:36:49.923036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.162 [2024-11-20 13:36:49.923072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.162 [2024-11-20 13:36:49.923109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.162 [2024-11-20 13:36:49.923152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:59.162 [2024-11-20 13:36:49.923318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:88 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.923723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.923739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:59.162 [2024-11-20 13:36:49.924144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.162 [2024-11-20 13:36:49.924173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.162 8041.94 IOPS, 31.41 MiB/s [2024-11-20T13:37:11.119Z] 7568.88 IOPS, 29.57 MiB/s [2024-11-20T13:37:11.119Z] 7148.39 IOPS, 27.92 MiB/s [2024-11-20T13:37:11.119Z] 6772.16 IOPS, 26.45 MiB/s [2024-11-20T13:37:11.119Z] 6491.30 IOPS, 25.36 MiB/s [2024-11-20T13:37:11.119Z] 6603.52 IOPS, 25.80 MiB/s [2024-11-20T13:37:11.119Z] 6697.55 IOPS, 26.16 MiB/s [2024-11-20T13:37:11.119Z] 6782.70 IOPS, 26.49 MiB/s [2024-11-20T13:37:11.119Z] 6988.88 IOPS, 27.30 MiB/s [2024-11-20T13:37:11.119Z] 7159.68 IOPS, 27.97 MiB/s [2024-11-20T13:37:11.119Z] 7347.50 IOPS, 28.70 MiB/s [2024-11-20T13:37:11.119Z] 7483.63 IOPS, 29.23 MiB/s [2024-11-20T13:37:11.119Z] 7533.50 IOPS, 29.43 MiB/s [2024-11-20T13:37:11.119Z] 7563.66 IOPS, 29.55 MiB/s [2024-11-20T13:37:11.120Z] 7608.87 IOPS, 29.72 MiB/s [2024-11-20T13:37:11.120Z] 7734.55 IOPS, 30.21 MiB/s [2024-11-20T13:37:11.120Z] 7875.38 IOPS, 30.76 MiB/s [2024-11-20T13:37:11.120Z] 8006.55 IOPS, 31.28 MiB/s [2024-11-20T13:37:11.120Z] [2024-11-20 13:37:07.748417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.748503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.748598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.748622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.748645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.748661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.748682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.748698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.748720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:59.163 [2024-11-20 13:37:07.748735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.748756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.748771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.748793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.748808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.748830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.748845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.748866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.748881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.748902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.748917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.748955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.748971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.748993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.749008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.749044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.749096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 
nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.749133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.749170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.749225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.749266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.749304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.749348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.749385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.749422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.749458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.749496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.749533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.749580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.749618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.749656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.163 [2024-11-20 13:37:07.749693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.749729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.749767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:59.163 [2024-11-20 13:37:07.749788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.163 [2024-11-20 13:37:07.749803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.749825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.749840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.749864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.749881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:17:59.164 [2024-11-20 13:37:07.749903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.749919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.749941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.749956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.749978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.749993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.750015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.750038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.750061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.750076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.750098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.750125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.750147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.750163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.750197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.750215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.750237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.750253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.750275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.750290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.750313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.750328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.750370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.750391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.750414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.750430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.750452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.750476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.750498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.750513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.752449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.752512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.752551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.752588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.752625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.752661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.752699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.752736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.752773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.752811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.164 [2024-11-20 13:37:07.752848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.752885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.752936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.752983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.753001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.753023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:59.164 [2024-11-20 13:37:07.753039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.753061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.753076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:59.164 [2024-11-20 13:37:07.753098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.164 [2024-11-20 13:37:07.753113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:59.164 8080.26 IOPS, 31.56 MiB/s [2024-11-20T13:37:11.121Z] 8107.91 IOPS, 31.67 MiB/s [2024-11-20T13:37:11.121Z] 8132.42 IOPS, 31.77 MiB/s [2024-11-20T13:37:11.121Z] Received shutdown signal, test time was about 36.702675 seconds 00:17:59.164 00:17:59.164 Latency(us) 00:17:59.164 [2024-11-20T13:37:11.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.164 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:59.164 Verification LBA range: start 0x0 length 0x4000 00:17:59.164 Nvme0n1 : 36.70 8146.26 31.82 0.00 0.00 15680.04 454.28 4026531.84 00:17:59.164 [2024-11-20T13:37:11.121Z] =================================================================================================================== 00:17:59.164 [2024-11-20T13:37:11.121Z] Total : 8146.26 31.82 0.00 0.00 15680.04 454.28 4026531.84 00:17:59.165 13:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.424 rmmod nvme_tcp 00:17:59.424 rmmod nvme_fabrics 00:17:59.424 rmmod nvme_keyring 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' 
-n 76881 ']' 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76881 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76881 ']' 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76881 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.424 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76881 00:17:59.683 killing process with pid 76881 00:17:59.683 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.683 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.683 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76881' 00:17:59.683 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76881 00:17:59.683 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76881 00:17:59.683 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:59.683 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:59.683 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:59.683 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:17:59.683 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:17:59.684 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:17:59.684 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:59.684 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:59.684 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:59.684 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:17:59.942 ************************************ 00:17:59.942 END TEST nvmf_host_multipath_status 00:17:59.942 ************************************ 00:17:59.942 00:17:59.942 real 0m42.925s 00:17:59.942 user 2m19.564s 00:17:59.942 sys 0m12.776s 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.942 13:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:00.202 13:37:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:00.202 13:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:00.202 13:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.202 13:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.202 ************************************ 00:18:00.202 START TEST nvmf_discovery_remove_ifc 00:18:00.202 ************************************ 00:18:00.202 13:37:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:00.202 * Looking for test storage... 
00:18:00.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.202 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:00.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.203 --rc genhtml_branch_coverage=1 00:18:00.203 --rc genhtml_function_coverage=1 00:18:00.203 --rc genhtml_legend=1 00:18:00.203 --rc geninfo_all_blocks=1 00:18:00.203 --rc geninfo_unexecuted_blocks=1 00:18:00.203 00:18:00.203 ' 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:00.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.203 --rc genhtml_branch_coverage=1 00:18:00.203 --rc genhtml_function_coverage=1 00:18:00.203 --rc genhtml_legend=1 00:18:00.203 --rc geninfo_all_blocks=1 00:18:00.203 --rc geninfo_unexecuted_blocks=1 00:18:00.203 00:18:00.203 ' 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:00.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.203 --rc genhtml_branch_coverage=1 00:18:00.203 --rc genhtml_function_coverage=1 00:18:00.203 --rc genhtml_legend=1 00:18:00.203 --rc geninfo_all_blocks=1 00:18:00.203 --rc geninfo_unexecuted_blocks=1 00:18:00.203 00:18:00.203 ' 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:00.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.203 --rc genhtml_branch_coverage=1 00:18:00.203 --rc genhtml_function_coverage=1 00:18:00.203 --rc genhtml_legend=1 00:18:00.203 --rc geninfo_all_blocks=1 00:18:00.203 --rc geninfo_unexecuted_blocks=1 00:18:00.203 00:18:00.203 ' 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:00.203 13:37:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.203 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.204 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:00.204 13:37:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:00.204 Cannot find device "nvmf_init_br" 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:00.204 Cannot find device "nvmf_init_br2" 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:00.204 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:00.463 Cannot find device "nvmf_tgt_br" 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.463 Cannot find device "nvmf_tgt_br2" 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:00.463 Cannot find device "nvmf_init_br" 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:00.463 Cannot find device "nvmf_init_br2" 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:00.463 Cannot find device "nvmf_tgt_br" 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:00.463 Cannot find device "nvmf_tgt_br2" 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:00.463 Cannot find device "nvmf_br" 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:00.463 Cannot find device "nvmf_init_if" 00:18:00.463 13:37:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:00.463 Cannot find device "nvmf_init_if2" 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:00.463 13:37:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:00.463 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:00.722 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:00.722 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:18:00.722 00:18:00.722 --- 10.0.0.3 ping statistics --- 00:18:00.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.722 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:00.722 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:00.722 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:18:00.722 00:18:00.722 --- 10.0.0.4 ping statistics --- 00:18:00.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.722 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:00.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:00.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:18:00.722 00:18:00.722 --- 10.0.0.1 ping statistics --- 00:18:00.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.722 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:00.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:18:00.722 00:18:00.722 --- 10.0.0.2 ping statistics --- 00:18:00.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.722 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77803 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77803 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77803 ']' 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
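The nvmf_veth_init sequence traced above reduces to the topology below. This is a minimal hand-written sketch, trimmed to a single initiator/target veth pair (the helper additionally wires up nvmf_init_if2/nvmf_tgt_if2 for 10.0.0.2 and 10.0.0.4 and tags its iptables rules with an SPDK_NVMF comment); it is not the helper itself:

    # Target interfaces live in their own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # Initiator on 10.0.0.1, target on 10.0.0.3, same /24
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_tgt_br up
    # Bridge the host-side veth ends together
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Allow NVMe/TCP (port 4420) in and let the bridge forward
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Connectivity check in both directions, matching the pings above
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1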
00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.722 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:00.722 [2024-11-20 13:37:12.577490] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:18:00.722 [2024-11-20 13:37:12.578274] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.981 [2024-11-20 13:37:12.729813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.981 [2024-11-20 13:37:12.799007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.981 [2024-11-20 13:37:12.799068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.981 [2024-11-20 13:37:12.799082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.981 [2024-11-20 13:37:12.799093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.981 [2024-11-20 13:37:12.799102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.981 [2024-11-20 13:37:12.799592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.981 [2024-11-20 13:37:12.856557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:00.981 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.981 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:18:00.981 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:00.981 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:00.981 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:01.240 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.240 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:01.240 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.240 13:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:01.240 [2024-11-20 13:37:12.978292] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.240 [2024-11-20 13:37:12.986469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:01.240 null0 00:18:01.240 [2024-11-20 13:37:13.018407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:01.240 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.240 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77827 00:18:01.240 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:01.240 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77827 /tmp/host.sock 00:18:01.240 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77827 ']' 00:18:01.240 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:01.240 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.240 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:01.240 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:01.240 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.240 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:01.240 [2024-11-20 13:37:13.094528] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:18:01.240 [2024-11-20 13:37:13.094620] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77827 ] 00:18:01.498 [2024-11-20 13:37:13.234780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.498 [2024-11-20 13:37:13.301840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.498 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.498 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:18:01.498 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:01.498 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:01.498 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.498 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:01.498 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.498 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:01.498 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.498 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:01.498 [2024-11-20 13:37:13.438170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:01.756 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.756 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:01.756 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.756 13:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:02.692 [2024-11-20 13:37:14.501491] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:02.692 [2024-11-20 13:37:14.501551] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:02.692 [2024-11-20 13:37:14.501573] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:02.692 [2024-11-20 13:37:14.507521] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:02.692 [2024-11-20 13:37:14.561978] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:02.692 [2024-11-20 13:37:14.563118] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xcbafc0:1 started. 00:18:02.692 [2024-11-20 13:37:14.565046] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:02.692 [2024-11-20 13:37:14.565110] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:02.692 [2024-11-20 13:37:14.565141] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:02.692 [2024-11-20 13:37:14.565160] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:02.692 [2024-11-20 13:37:14.565201] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:02.692 [2024-11-20 13:37:14.570201] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xcbafc0 was disconnected and freed. delete nvme_qpair. 
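The host side of the test, condensed from the trace above: a second nvmf_tgt instance runs on the initiator with its RPC socket at /tmp/host.sock, and bdev_nvme_start_discovery attaches it to the target's discovery service on 10.0.0.3:8009 with deliberately short loss/reconnect timeouts so that removing the interface is noticed within seconds. A sketch using scripts/rpc.py directly (the harness's rpc_cmd wrapper issues the same RPCs; paths assume the SPDK repo root):

    # Host app: single core, private RPC socket, bdev_nvme debug logging
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    # Options exactly as passed in the trace, then finish framework init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    # Attach through discovery; --wait-for-attach blocks until the subsystem is connected
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # The discovered namespace surfaces as bdev nvme0n1
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'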
00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:02.692 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:02.951 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.951 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:02.951 13:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:03.888 13:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:03.888 13:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:03.888 13:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:03.888 13:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.888 13:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:03.888 13:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:03.888 13:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:03.888 13:37:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.888 13:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:03.888 13:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:04.861 13:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:04.861 13:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:04.861 13:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.861 13:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:04.861 13:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:04.861 13:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:04.861 13:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:04.861 13:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.861 13:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:04.861 13:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:06.237 13:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:06.237 13:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:06.237 13:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:06.237 13:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.237 13:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:06.237 13:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:06.237 13:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:06.237 13:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.237 13:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:06.237 13:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:07.173 13:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:07.173 13:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:07.173 13:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:07.173 13:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.173 13:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:07.173 13:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:07.173 13:37:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:07.173 13:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.173 13:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:07.173 13:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:08.108 13:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:08.108 13:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:08.108 13:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.108 13:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:08.108 13:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:08.108 13:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:08.108 13:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:08.108 13:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.108 [2024-11-20 13:37:19.992606] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:08.108 [2024-11-20 13:37:19.992684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.108 [2024-11-20 13:37:19.992701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.108 [2024-11-20 13:37:19.992715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.108 [2024-11-20 13:37:19.992725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.108 [2024-11-20 13:37:19.992735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.108 [2024-11-20 13:37:19.992745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.109 [2024-11-20 13:37:19.992755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.109 [2024-11-20 13:37:19.992764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.109 [2024-11-20 13:37:19.992775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.109 [2024-11-20 13:37:19.992784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.109 [2024-11-20 13:37:19.992794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc97240 is same with the state(6) to be set 00:18:08.109 [2024-11-20 13:37:20.002600] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc97240 (9): Bad file descriptor 00:18:08.109 [2024-11-20 13:37:20.012621] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:08.109 [2024-11-20 13:37:20.012646] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:08.109 [2024-11-20 13:37:20.012653] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:08.109 [2024-11-20 13:37:20.012659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:08.109 [2024-11-20 13:37:20.012702] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:08.109 13:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:08.109 13:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:09.484 13:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:09.484 13:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.484 13:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.484 13:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:09.484 13:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:09.484 13:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:09.484 13:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:09.484 [2024-11-20 13:37:21.048316] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:18:09.484 [2024-11-20 13:37:21.048432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc97240 with addr=10.0.0.3, port=4420 00:18:09.484 [2024-11-20 13:37:21.048469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc97240 is same with the state(6) to be set 00:18:09.484 [2024-11-20 13:37:21.048539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc97240 (9): Bad file descriptor 00:18:09.484 [2024-11-20 13:37:21.049494] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:18:09.484 [2024-11-20 13:37:21.049736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:09.484 [2024-11-20 13:37:21.049817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:09.484 [2024-11-20 13:37:21.050137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:09.484 [2024-11-20 13:37:21.050270] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:09.484 [2024-11-20 13:37:21.050494] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:18:09.484 [2024-11-20 13:37:21.050507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:09.484 [2024-11-20 13:37:21.050575] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:09.484 [2024-11-20 13:37:21.050595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:09.484 13:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.484 13:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:09.484 13:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:10.421 [2024-11-20 13:37:22.050804] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:10.421 [2024-11-20 13:37:22.050866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:10.421 [2024-11-20 13:37:22.050903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:10.421 [2024-11-20 13:37:22.050916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:10.421 [2024-11-20 13:37:22.050927] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:18:10.421 [2024-11-20 13:37:22.050938] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:10.421 [2024-11-20 13:37:22.050945] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:10.421 [2024-11-20 13:37:22.050950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
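At this point the test has already deleted the target address and downed nvmf_tgt_if inside the namespace (the @75/@76 steps above), so every reconnect attempt times out with errno 110 and, once the 2-second ctrlr-loss timeout expires, the reset path gives up as shown. A hedged way to reproduce and observe the same state by hand (bdev_nvme_get_controllers is assumed as the inspection RPC; the emptying bdev list is what the harness itself polls for):

    # Tear the target path down, as the test did
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # Watch the host app give up: controller state degrades, then the bdev disappears
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # eventually prints nothing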
00:18:10.421 [2024-11-20 13:37:22.050987] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:18:10.421 [2024-11-20 13:37:22.051051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.421 [2024-11-20 13:37:22.051068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.421 [2024-11-20 13:37:22.051082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.421 [2024-11-20 13:37:22.051092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.422 [2024-11-20 13:37:22.051103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.422 [2024-11-20 13:37:22.051112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.422 [2024-11-20 13:37:22.051123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.422 [2024-11-20 13:37:22.051132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.422 [2024-11-20 13:37:22.051143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.422 [2024-11-20 13:37:22.051152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.422 [2024-11-20 13:37:22.051161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:18:10.422 [2024-11-20 13:37:22.051228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc22a20 (9): Bad file descriptor 00:18:10.422 [2024-11-20 13:37:22.052204] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:10.422 [2024-11-20 13:37:22.052225] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:10.422 13:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:11.360 13:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:11.360 13:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.360 13:37:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:11.360 13:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.360 13:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:11.360 13:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:11.360 13:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:11.360 13:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.360 13:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:11.360 13:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:12.296 [2024-11-20 13:37:24.056151] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:12.296 [2024-11-20 13:37:24.056204] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:12.296 [2024-11-20 13:37:24.056225] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:12.296 [2024-11-20 13:37:24.062203] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:18:12.296 [2024-11-20 13:37:24.116560] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:18:12.296 [2024-11-20 13:37:24.117548] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xc75a60:1 started. 00:18:12.296 [2024-11-20 13:37:24.118918] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:12.296 [2024-11-20 13:37:24.118967] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:12.296 [2024-11-20 13:37:24.118993] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:12.296 [2024-11-20 13:37:24.119012] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:18:12.296 [2024-11-20 13:37:24.119023] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:12.296 [2024-11-20 13:37:24.124879] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xc75a60 was disconnected and freed. delete nvme_qpair. 
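
The repeated rpc_cmd/bdev_get_bdevs/jq/sleep 1 trace above and below is the test's wait-for-bdev polling loop: it queries the SPDK host's bdev list over the RPC socket once a second until the expected name (nvme1n1) reappears after the discovery service re-attaches the subsystem on 10.0.0.3:4420. A minimal standalone sketch of that pattern, assuming the repo's scripts/rpc.py and the /tmp/host.sock RPC socket used by this run (the helper name and substring match are illustrative, not the script's exact code):

    # Poll the SPDK host's bdev list until the expected bdev shows up,
    # mirroring the get_bdev_list / sleep 1 loop traced in this log.
    wait_for_bdev() {
        local expected=$1 bdevs
        while true; do
            bdevs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
                    bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
            [[ "$bdevs" == *"$expected"* ]] && break
            sleep 1
        done
    }

    wait_for_bdev nvme1n1

Once the bdev list matches, the trap is cleared and the host-side process (pid 77827 here) is killed, which is what the killprocess trace further down shows.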
00:18:12.565 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:12.565 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:12.565 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77827 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77827 ']' 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77827 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77827 00:18:12.566 killing process with pid 77827 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77827' 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77827 00:18:12.566 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77827 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:12.828 rmmod nvme_tcp 00:18:12.828 rmmod nvme_fabrics 00:18:12.828 rmmod nvme_keyring 00:18:12.828 13:37:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77803 ']' 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77803 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77803 ']' 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77803 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77803 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:12.828 killing process with pid 77803 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77803' 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77803 00:18:12.828 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77803 00:18:13.086 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:13.086 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:13.087 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:13.087 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:18:13.087 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:13.087 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:18:13.087 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:18:13.087 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:13.087 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:13.087 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:13.087 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:13.087 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:13.087 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.087 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:13.087 13:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:13.087 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:13.087 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:13.087 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:18:13.345 ************************************ 00:18:13.345 END TEST nvmf_discovery_remove_ifc 00:18:13.345 ************************************ 00:18:13.345 00:18:13.345 real 0m13.223s 00:18:13.345 user 0m22.454s 00:18:13.345 sys 0m2.565s 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.345 ************************************ 00:18:13.345 START TEST nvmf_identify_kernel_target 00:18:13.345 ************************************ 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:13.345 * Looking for test storage... 
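
The nvmf_identify_kernel_target test that starts here exports a local NVMe block device through the Linux kernel nvmet target over TCP and then runs spdk_nvme_identify against it. The mkdir/echo/ln -s trace further down in this log is the configfs setup; a minimal sketch of those steps, using the NQN, address, port, and block device from this run, with the attribute file names assumed to be the stock nvmet configfs layout rather than copied from the script:

    # Load the kernel target and its TCP transport (the trace shows modprobe nvmet;
    # nvmet-tcp is assumed to be needed for the TCP listener).
    modprobe nvmet nvmet-tcp
    cd /sys/kernel/config/nvmet

    # Subsystem with one namespace backed by the local NVMe block device
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    echo /dev/nvme1n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable

    # TCP listener on the address this run resolved as the target IP
    mkdir ports/1
    echo 10.0.0.1 > ports/1/addr_traddr
    echo tcp      > ports/1/addr_trtype
    echo 4420     > ports/1/addr_trsvcid
    echo ipv4     > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

After this, the nvme discover command shown later in the log (against 10.0.0.1:4420 with this run's hostnqn/hostid) returns the two discovery log entries, and spdk_nvme_identify is pointed at the same discovery service and then at the testnqn subsystem.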
00:18:13.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:13.345 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:13.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.605 --rc genhtml_branch_coverage=1 00:18:13.605 --rc genhtml_function_coverage=1 00:18:13.605 --rc genhtml_legend=1 00:18:13.605 --rc geninfo_all_blocks=1 00:18:13.605 --rc geninfo_unexecuted_blocks=1 00:18:13.605 00:18:13.605 ' 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:13.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.605 --rc genhtml_branch_coverage=1 00:18:13.605 --rc genhtml_function_coverage=1 00:18:13.605 --rc genhtml_legend=1 00:18:13.605 --rc geninfo_all_blocks=1 00:18:13.605 --rc geninfo_unexecuted_blocks=1 00:18:13.605 00:18:13.605 ' 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:13.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.605 --rc genhtml_branch_coverage=1 00:18:13.605 --rc genhtml_function_coverage=1 00:18:13.605 --rc genhtml_legend=1 00:18:13.605 --rc geninfo_all_blocks=1 00:18:13.605 --rc geninfo_unexecuted_blocks=1 00:18:13.605 00:18:13.605 ' 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:13.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.605 --rc genhtml_branch_coverage=1 00:18:13.605 --rc genhtml_function_coverage=1 00:18:13.605 --rc genhtml_legend=1 00:18:13.605 --rc geninfo_all_blocks=1 00:18:13.605 --rc geninfo_unexecuted_blocks=1 00:18:13.605 00:18:13.605 ' 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.605 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.606 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:13.606 13:37:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:13.606 13:37:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:13.606 Cannot find device "nvmf_init_br" 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:13.606 Cannot find device "nvmf_init_br2" 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:13.606 Cannot find device "nvmf_tgt_br" 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.606 Cannot find device "nvmf_tgt_br2" 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:13.606 Cannot find device "nvmf_init_br" 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:13.606 Cannot find device "nvmf_init_br2" 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:13.606 Cannot find device "nvmf_tgt_br" 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:13.606 Cannot find device "nvmf_tgt_br2" 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:13.606 Cannot find device "nvmf_br" 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:13.606 Cannot find device "nvmf_init_if" 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:13.606 Cannot find device "nvmf_init_if2" 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.606 13:37:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:18:13.606 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:13.865 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:13.866 13:37:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:13.866 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:13.866 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:18:13.866 00:18:13.866 --- 10.0.0.3 ping statistics --- 00:18:13.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.866 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:13.866 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:13.866 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:18:13.866 00:18:13.866 --- 10.0.0.4 ping statistics --- 00:18:13.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.866 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:13.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:13.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:13.866 00:18:13.866 --- 10.0.0.1 ping statistics --- 00:18:13.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.866 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:13.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:13.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:18:13.866 00:18:13.866 --- 10.0.0.2 ping statistics --- 00:18:13.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.866 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:13.866 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:14.125 13:37:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:14.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:14.383 Waiting for block devices as requested 00:18:14.383 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:14.642 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:14.642 No valid GPT data, bailing 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:14.642 13:37:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:14.642 No valid GPT data, bailing 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:14.642 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:14.902 No valid GPT data, bailing 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:14.902 No valid GPT data, bailing 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:14.902 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid=8ff08136-65da-4f4c-b769-a07096c587b5 -a 10.0.0.1 -t tcp -s 4420 00:18:14.902 00:18:14.902 Discovery Log Number of Records 2, Generation counter 2 00:18:14.902 =====Discovery Log Entry 0====== 00:18:14.902 trtype: tcp 00:18:14.902 adrfam: ipv4 00:18:14.902 subtype: current discovery subsystem 00:18:14.902 treq: not specified, sq flow control disable supported 00:18:14.902 portid: 1 00:18:14.902 trsvcid: 4420 00:18:14.902 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:14.902 traddr: 10.0.0.1 00:18:14.902 eflags: none 00:18:14.902 sectype: none 00:18:14.902 =====Discovery Log Entry 1====== 00:18:14.903 trtype: tcp 00:18:14.903 adrfam: ipv4 00:18:14.903 subtype: nvme subsystem 00:18:14.903 treq: not 
specified, sq flow control disable supported 00:18:14.903 portid: 1 00:18:14.903 trsvcid: 4420 00:18:14.903 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:14.903 traddr: 10.0.0.1 00:18:14.903 eflags: none 00:18:14.903 sectype: none 00:18:14.903 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:14.903 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:15.163 ===================================================== 00:18:15.163 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:15.163 ===================================================== 00:18:15.163 Controller Capabilities/Features 00:18:15.163 ================================ 00:18:15.163 Vendor ID: 0000 00:18:15.163 Subsystem Vendor ID: 0000 00:18:15.163 Serial Number: b18f96178d5abe89efff 00:18:15.163 Model Number: Linux 00:18:15.163 Firmware Version: 6.8.9-20 00:18:15.163 Recommended Arb Burst: 0 00:18:15.163 IEEE OUI Identifier: 00 00 00 00:18:15.163 Multi-path I/O 00:18:15.163 May have multiple subsystem ports: No 00:18:15.163 May have multiple controllers: No 00:18:15.163 Associated with SR-IOV VF: No 00:18:15.163 Max Data Transfer Size: Unlimited 00:18:15.163 Max Number of Namespaces: 0 00:18:15.163 Max Number of I/O Queues: 1024 00:18:15.163 NVMe Specification Version (VS): 1.3 00:18:15.163 NVMe Specification Version (Identify): 1.3 00:18:15.163 Maximum Queue Entries: 1024 00:18:15.163 Contiguous Queues Required: No 00:18:15.163 Arbitration Mechanisms Supported 00:18:15.163 Weighted Round Robin: Not Supported 00:18:15.163 Vendor Specific: Not Supported 00:18:15.163 Reset Timeout: 7500 ms 00:18:15.163 Doorbell Stride: 4 bytes 00:18:15.163 NVM Subsystem Reset: Not Supported 00:18:15.163 Command Sets Supported 00:18:15.163 NVM Command Set: Supported 00:18:15.163 Boot Partition: Not Supported 00:18:15.163 Memory Page Size Minimum: 4096 bytes 00:18:15.163 Memory Page Size Maximum: 4096 bytes 00:18:15.163 Persistent Memory Region: Not Supported 00:18:15.163 Optional Asynchronous Events Supported 00:18:15.163 Namespace Attribute Notices: Not Supported 00:18:15.163 Firmware Activation Notices: Not Supported 00:18:15.163 ANA Change Notices: Not Supported 00:18:15.163 PLE Aggregate Log Change Notices: Not Supported 00:18:15.163 LBA Status Info Alert Notices: Not Supported 00:18:15.163 EGE Aggregate Log Change Notices: Not Supported 00:18:15.163 Normal NVM Subsystem Shutdown event: Not Supported 00:18:15.163 Zone Descriptor Change Notices: Not Supported 00:18:15.163 Discovery Log Change Notices: Supported 00:18:15.163 Controller Attributes 00:18:15.163 128-bit Host Identifier: Not Supported 00:18:15.163 Non-Operational Permissive Mode: Not Supported 00:18:15.163 NVM Sets: Not Supported 00:18:15.163 Read Recovery Levels: Not Supported 00:18:15.163 Endurance Groups: Not Supported 00:18:15.163 Predictable Latency Mode: Not Supported 00:18:15.163 Traffic Based Keep ALive: Not Supported 00:18:15.163 Namespace Granularity: Not Supported 00:18:15.163 SQ Associations: Not Supported 00:18:15.163 UUID List: Not Supported 00:18:15.163 Multi-Domain Subsystem: Not Supported 00:18:15.163 Fixed Capacity Management: Not Supported 00:18:15.163 Variable Capacity Management: Not Supported 00:18:15.163 Delete Endurance Group: Not Supported 00:18:15.163 Delete NVM Set: Not Supported 00:18:15.163 Extended LBA Formats Supported: Not Supported 00:18:15.163 Flexible Data 
Placement Supported: Not Supported 00:18:15.163 00:18:15.163 Controller Memory Buffer Support 00:18:15.163 ================================ 00:18:15.163 Supported: No 00:18:15.163 00:18:15.163 Persistent Memory Region Support 00:18:15.163 ================================ 00:18:15.163 Supported: No 00:18:15.163 00:18:15.163 Admin Command Set Attributes 00:18:15.163 ============================ 00:18:15.163 Security Send/Receive: Not Supported 00:18:15.163 Format NVM: Not Supported 00:18:15.163 Firmware Activate/Download: Not Supported 00:18:15.163 Namespace Management: Not Supported 00:18:15.163 Device Self-Test: Not Supported 00:18:15.163 Directives: Not Supported 00:18:15.163 NVMe-MI: Not Supported 00:18:15.163 Virtualization Management: Not Supported 00:18:15.163 Doorbell Buffer Config: Not Supported 00:18:15.163 Get LBA Status Capability: Not Supported 00:18:15.163 Command & Feature Lockdown Capability: Not Supported 00:18:15.163 Abort Command Limit: 1 00:18:15.163 Async Event Request Limit: 1 00:18:15.163 Number of Firmware Slots: N/A 00:18:15.163 Firmware Slot 1 Read-Only: N/A 00:18:15.163 Firmware Activation Without Reset: N/A 00:18:15.163 Multiple Update Detection Support: N/A 00:18:15.163 Firmware Update Granularity: No Information Provided 00:18:15.163 Per-Namespace SMART Log: No 00:18:15.163 Asymmetric Namespace Access Log Page: Not Supported 00:18:15.163 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:15.163 Command Effects Log Page: Not Supported 00:18:15.163 Get Log Page Extended Data: Supported 00:18:15.163 Telemetry Log Pages: Not Supported 00:18:15.163 Persistent Event Log Pages: Not Supported 00:18:15.163 Supported Log Pages Log Page: May Support 00:18:15.163 Commands Supported & Effects Log Page: Not Supported 00:18:15.163 Feature Identifiers & Effects Log Page:May Support 00:18:15.163 NVMe-MI Commands & Effects Log Page: May Support 00:18:15.163 Data Area 4 for Telemetry Log: Not Supported 00:18:15.163 Error Log Page Entries Supported: 1 00:18:15.163 Keep Alive: Not Supported 00:18:15.163 00:18:15.163 NVM Command Set Attributes 00:18:15.163 ========================== 00:18:15.163 Submission Queue Entry Size 00:18:15.163 Max: 1 00:18:15.163 Min: 1 00:18:15.163 Completion Queue Entry Size 00:18:15.163 Max: 1 00:18:15.163 Min: 1 00:18:15.163 Number of Namespaces: 0 00:18:15.163 Compare Command: Not Supported 00:18:15.163 Write Uncorrectable Command: Not Supported 00:18:15.163 Dataset Management Command: Not Supported 00:18:15.163 Write Zeroes Command: Not Supported 00:18:15.163 Set Features Save Field: Not Supported 00:18:15.163 Reservations: Not Supported 00:18:15.163 Timestamp: Not Supported 00:18:15.163 Copy: Not Supported 00:18:15.163 Volatile Write Cache: Not Present 00:18:15.163 Atomic Write Unit (Normal): 1 00:18:15.163 Atomic Write Unit (PFail): 1 00:18:15.163 Atomic Compare & Write Unit: 1 00:18:15.163 Fused Compare & Write: Not Supported 00:18:15.163 Scatter-Gather List 00:18:15.163 SGL Command Set: Supported 00:18:15.163 SGL Keyed: Not Supported 00:18:15.163 SGL Bit Bucket Descriptor: Not Supported 00:18:15.163 SGL Metadata Pointer: Not Supported 00:18:15.163 Oversized SGL: Not Supported 00:18:15.163 SGL Metadata Address: Not Supported 00:18:15.163 SGL Offset: Supported 00:18:15.163 Transport SGL Data Block: Not Supported 00:18:15.163 Replay Protected Memory Block: Not Supported 00:18:15.163 00:18:15.163 Firmware Slot Information 00:18:15.163 ========================= 00:18:15.163 Active slot: 0 00:18:15.163 00:18:15.163 00:18:15.163 Error Log 
00:18:15.163 ========= 00:18:15.163 00:18:15.163 Active Namespaces 00:18:15.163 ================= 00:18:15.163 Discovery Log Page 00:18:15.163 ================== 00:18:15.163 Generation Counter: 2 00:18:15.163 Number of Records: 2 00:18:15.163 Record Format: 0 00:18:15.163 00:18:15.163 Discovery Log Entry 0 00:18:15.163 ---------------------- 00:18:15.163 Transport Type: 3 (TCP) 00:18:15.163 Address Family: 1 (IPv4) 00:18:15.163 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:15.163 Entry Flags: 00:18:15.163 Duplicate Returned Information: 0 00:18:15.163 Explicit Persistent Connection Support for Discovery: 0 00:18:15.163 Transport Requirements: 00:18:15.163 Secure Channel: Not Specified 00:18:15.163 Port ID: 1 (0x0001) 00:18:15.163 Controller ID: 65535 (0xffff) 00:18:15.163 Admin Max SQ Size: 32 00:18:15.163 Transport Service Identifier: 4420 00:18:15.163 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:15.163 Transport Address: 10.0.0.1 00:18:15.163 Discovery Log Entry 1 00:18:15.163 ---------------------- 00:18:15.163 Transport Type: 3 (TCP) 00:18:15.163 Address Family: 1 (IPv4) 00:18:15.163 Subsystem Type: 2 (NVM Subsystem) 00:18:15.163 Entry Flags: 00:18:15.163 Duplicate Returned Information: 0 00:18:15.163 Explicit Persistent Connection Support for Discovery: 0 00:18:15.163 Transport Requirements: 00:18:15.163 Secure Channel: Not Specified 00:18:15.163 Port ID: 1 (0x0001) 00:18:15.163 Controller ID: 65535 (0xffff) 00:18:15.163 Admin Max SQ Size: 32 00:18:15.163 Transport Service Identifier: 4420 00:18:15.163 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:15.163 Transport Address: 10.0.0.1 00:18:15.163 13:37:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:15.423 get_feature(0x01) failed 00:18:15.423 get_feature(0x02) failed 00:18:15.423 get_feature(0x04) failed 00:18:15.423 ===================================================== 00:18:15.423 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:15.423 ===================================================== 00:18:15.423 Controller Capabilities/Features 00:18:15.423 ================================ 00:18:15.423 Vendor ID: 0000 00:18:15.423 Subsystem Vendor ID: 0000 00:18:15.423 Serial Number: 509e50cc3e39cbce6c28 00:18:15.423 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:15.423 Firmware Version: 6.8.9-20 00:18:15.423 Recommended Arb Burst: 6 00:18:15.423 IEEE OUI Identifier: 00 00 00 00:18:15.423 Multi-path I/O 00:18:15.423 May have multiple subsystem ports: Yes 00:18:15.423 May have multiple controllers: Yes 00:18:15.423 Associated with SR-IOV VF: No 00:18:15.423 Max Data Transfer Size: Unlimited 00:18:15.423 Max Number of Namespaces: 1024 00:18:15.423 Max Number of I/O Queues: 128 00:18:15.423 NVMe Specification Version (VS): 1.3 00:18:15.423 NVMe Specification Version (Identify): 1.3 00:18:15.423 Maximum Queue Entries: 1024 00:18:15.423 Contiguous Queues Required: No 00:18:15.423 Arbitration Mechanisms Supported 00:18:15.423 Weighted Round Robin: Not Supported 00:18:15.423 Vendor Specific: Not Supported 00:18:15.423 Reset Timeout: 7500 ms 00:18:15.423 Doorbell Stride: 4 bytes 00:18:15.423 NVM Subsystem Reset: Not Supported 00:18:15.423 Command Sets Supported 00:18:15.423 NVM Command Set: Supported 00:18:15.423 Boot Partition: Not Supported 00:18:15.423 Memory 
Page Size Minimum: 4096 bytes 00:18:15.423 Memory Page Size Maximum: 4096 bytes 00:18:15.423 Persistent Memory Region: Not Supported 00:18:15.423 Optional Asynchronous Events Supported 00:18:15.423 Namespace Attribute Notices: Supported 00:18:15.423 Firmware Activation Notices: Not Supported 00:18:15.423 ANA Change Notices: Supported 00:18:15.423 PLE Aggregate Log Change Notices: Not Supported 00:18:15.423 LBA Status Info Alert Notices: Not Supported 00:18:15.423 EGE Aggregate Log Change Notices: Not Supported 00:18:15.423 Normal NVM Subsystem Shutdown event: Not Supported 00:18:15.423 Zone Descriptor Change Notices: Not Supported 00:18:15.423 Discovery Log Change Notices: Not Supported 00:18:15.423 Controller Attributes 00:18:15.423 128-bit Host Identifier: Supported 00:18:15.423 Non-Operational Permissive Mode: Not Supported 00:18:15.423 NVM Sets: Not Supported 00:18:15.423 Read Recovery Levels: Not Supported 00:18:15.423 Endurance Groups: Not Supported 00:18:15.423 Predictable Latency Mode: Not Supported 00:18:15.423 Traffic Based Keep ALive: Supported 00:18:15.423 Namespace Granularity: Not Supported 00:18:15.423 SQ Associations: Not Supported 00:18:15.423 UUID List: Not Supported 00:18:15.423 Multi-Domain Subsystem: Not Supported 00:18:15.423 Fixed Capacity Management: Not Supported 00:18:15.423 Variable Capacity Management: Not Supported 00:18:15.423 Delete Endurance Group: Not Supported 00:18:15.423 Delete NVM Set: Not Supported 00:18:15.423 Extended LBA Formats Supported: Not Supported 00:18:15.423 Flexible Data Placement Supported: Not Supported 00:18:15.423 00:18:15.423 Controller Memory Buffer Support 00:18:15.423 ================================ 00:18:15.423 Supported: No 00:18:15.423 00:18:15.423 Persistent Memory Region Support 00:18:15.423 ================================ 00:18:15.423 Supported: No 00:18:15.423 00:18:15.423 Admin Command Set Attributes 00:18:15.423 ============================ 00:18:15.423 Security Send/Receive: Not Supported 00:18:15.423 Format NVM: Not Supported 00:18:15.423 Firmware Activate/Download: Not Supported 00:18:15.423 Namespace Management: Not Supported 00:18:15.423 Device Self-Test: Not Supported 00:18:15.423 Directives: Not Supported 00:18:15.423 NVMe-MI: Not Supported 00:18:15.423 Virtualization Management: Not Supported 00:18:15.423 Doorbell Buffer Config: Not Supported 00:18:15.423 Get LBA Status Capability: Not Supported 00:18:15.423 Command & Feature Lockdown Capability: Not Supported 00:18:15.423 Abort Command Limit: 4 00:18:15.423 Async Event Request Limit: 4 00:18:15.423 Number of Firmware Slots: N/A 00:18:15.423 Firmware Slot 1 Read-Only: N/A 00:18:15.423 Firmware Activation Without Reset: N/A 00:18:15.423 Multiple Update Detection Support: N/A 00:18:15.423 Firmware Update Granularity: No Information Provided 00:18:15.423 Per-Namespace SMART Log: Yes 00:18:15.423 Asymmetric Namespace Access Log Page: Supported 00:18:15.423 ANA Transition Time : 10 sec 00:18:15.423 00:18:15.423 Asymmetric Namespace Access Capabilities 00:18:15.423 ANA Optimized State : Supported 00:18:15.423 ANA Non-Optimized State : Supported 00:18:15.423 ANA Inaccessible State : Supported 00:18:15.423 ANA Persistent Loss State : Supported 00:18:15.423 ANA Change State : Supported 00:18:15.423 ANAGRPID is not changed : No 00:18:15.423 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:15.423 00:18:15.423 ANA Group Identifier Maximum : 128 00:18:15.423 Number of ANA Group Identifiers : 128 00:18:15.423 Max Number of Allowed Namespaces : 1024 00:18:15.423 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:18:15.423 Command Effects Log Page: Supported 00:18:15.423 Get Log Page Extended Data: Supported 00:18:15.423 Telemetry Log Pages: Not Supported 00:18:15.423 Persistent Event Log Pages: Not Supported 00:18:15.424 Supported Log Pages Log Page: May Support 00:18:15.424 Commands Supported & Effects Log Page: Not Supported 00:18:15.424 Feature Identifiers & Effects Log Page:May Support 00:18:15.424 NVMe-MI Commands & Effects Log Page: May Support 00:18:15.424 Data Area 4 for Telemetry Log: Not Supported 00:18:15.424 Error Log Page Entries Supported: 128 00:18:15.424 Keep Alive: Supported 00:18:15.424 Keep Alive Granularity: 1000 ms 00:18:15.424 00:18:15.424 NVM Command Set Attributes 00:18:15.424 ========================== 00:18:15.424 Submission Queue Entry Size 00:18:15.424 Max: 64 00:18:15.424 Min: 64 00:18:15.424 Completion Queue Entry Size 00:18:15.424 Max: 16 00:18:15.424 Min: 16 00:18:15.424 Number of Namespaces: 1024 00:18:15.424 Compare Command: Not Supported 00:18:15.424 Write Uncorrectable Command: Not Supported 00:18:15.424 Dataset Management Command: Supported 00:18:15.424 Write Zeroes Command: Supported 00:18:15.424 Set Features Save Field: Not Supported 00:18:15.424 Reservations: Not Supported 00:18:15.424 Timestamp: Not Supported 00:18:15.424 Copy: Not Supported 00:18:15.424 Volatile Write Cache: Present 00:18:15.424 Atomic Write Unit (Normal): 1 00:18:15.424 Atomic Write Unit (PFail): 1 00:18:15.424 Atomic Compare & Write Unit: 1 00:18:15.424 Fused Compare & Write: Not Supported 00:18:15.424 Scatter-Gather List 00:18:15.424 SGL Command Set: Supported 00:18:15.424 SGL Keyed: Not Supported 00:18:15.424 SGL Bit Bucket Descriptor: Not Supported 00:18:15.424 SGL Metadata Pointer: Not Supported 00:18:15.424 Oversized SGL: Not Supported 00:18:15.424 SGL Metadata Address: Not Supported 00:18:15.424 SGL Offset: Supported 00:18:15.424 Transport SGL Data Block: Not Supported 00:18:15.424 Replay Protected Memory Block: Not Supported 00:18:15.424 00:18:15.424 Firmware Slot Information 00:18:15.424 ========================= 00:18:15.424 Active slot: 0 00:18:15.424 00:18:15.424 Asymmetric Namespace Access 00:18:15.424 =========================== 00:18:15.424 Change Count : 0 00:18:15.424 Number of ANA Group Descriptors : 1 00:18:15.424 ANA Group Descriptor : 0 00:18:15.424 ANA Group ID : 1 00:18:15.424 Number of NSID Values : 1 00:18:15.424 Change Count : 0 00:18:15.424 ANA State : 1 00:18:15.424 Namespace Identifier : 1 00:18:15.424 00:18:15.424 Commands Supported and Effects 00:18:15.424 ============================== 00:18:15.424 Admin Commands 00:18:15.424 -------------- 00:18:15.424 Get Log Page (02h): Supported 00:18:15.424 Identify (06h): Supported 00:18:15.424 Abort (08h): Supported 00:18:15.424 Set Features (09h): Supported 00:18:15.424 Get Features (0Ah): Supported 00:18:15.424 Asynchronous Event Request (0Ch): Supported 00:18:15.424 Keep Alive (18h): Supported 00:18:15.424 I/O Commands 00:18:15.424 ------------ 00:18:15.424 Flush (00h): Supported 00:18:15.424 Write (01h): Supported LBA-Change 00:18:15.424 Read (02h): Supported 00:18:15.424 Write Zeroes (08h): Supported LBA-Change 00:18:15.424 Dataset Management (09h): Supported 00:18:15.424 00:18:15.424 Error Log 00:18:15.424 ========= 00:18:15.424 Entry: 0 00:18:15.424 Error Count: 0x3 00:18:15.424 Submission Queue Id: 0x0 00:18:15.424 Command Id: 0x5 00:18:15.424 Phase Bit: 0 00:18:15.424 Status Code: 0x2 00:18:15.424 Status Code Type: 0x0 00:18:15.424 Do Not Retry: 1 00:18:15.424 Error 
Location: 0x28 00:18:15.424 LBA: 0x0 00:18:15.424 Namespace: 0x0 00:18:15.424 Vendor Log Page: 0x0 00:18:15.424 ----------- 00:18:15.424 Entry: 1 00:18:15.424 Error Count: 0x2 00:18:15.424 Submission Queue Id: 0x0 00:18:15.424 Command Id: 0x5 00:18:15.424 Phase Bit: 0 00:18:15.424 Status Code: 0x2 00:18:15.424 Status Code Type: 0x0 00:18:15.424 Do Not Retry: 1 00:18:15.424 Error Location: 0x28 00:18:15.424 LBA: 0x0 00:18:15.424 Namespace: 0x0 00:18:15.424 Vendor Log Page: 0x0 00:18:15.424 ----------- 00:18:15.424 Entry: 2 00:18:15.424 Error Count: 0x1 00:18:15.424 Submission Queue Id: 0x0 00:18:15.424 Command Id: 0x4 00:18:15.424 Phase Bit: 0 00:18:15.424 Status Code: 0x2 00:18:15.424 Status Code Type: 0x0 00:18:15.424 Do Not Retry: 1 00:18:15.424 Error Location: 0x28 00:18:15.424 LBA: 0x0 00:18:15.424 Namespace: 0x0 00:18:15.424 Vendor Log Page: 0x0 00:18:15.424 00:18:15.424 Number of Queues 00:18:15.424 ================ 00:18:15.424 Number of I/O Submission Queues: 128 00:18:15.424 Number of I/O Completion Queues: 128 00:18:15.424 00:18:15.424 ZNS Specific Controller Data 00:18:15.424 ============================ 00:18:15.424 Zone Append Size Limit: 0 00:18:15.424 00:18:15.424 00:18:15.424 Active Namespaces 00:18:15.424 ================= 00:18:15.424 get_feature(0x05) failed 00:18:15.424 Namespace ID:1 00:18:15.424 Command Set Identifier: NVM (00h) 00:18:15.424 Deallocate: Supported 00:18:15.424 Deallocated/Unwritten Error: Not Supported 00:18:15.424 Deallocated Read Value: Unknown 00:18:15.424 Deallocate in Write Zeroes: Not Supported 00:18:15.424 Deallocated Guard Field: 0xFFFF 00:18:15.424 Flush: Supported 00:18:15.424 Reservation: Not Supported 00:18:15.424 Namespace Sharing Capabilities: Multiple Controllers 00:18:15.424 Size (in LBAs): 1310720 (5GiB) 00:18:15.424 Capacity (in LBAs): 1310720 (5GiB) 00:18:15.424 Utilization (in LBAs): 1310720 (5GiB) 00:18:15.424 UUID: 32207189-f1e3-4cd6-b9f0-ea1ccaa3d324 00:18:15.424 Thin Provisioning: Not Supported 00:18:15.424 Per-NS Atomic Units: Yes 00:18:15.424 Atomic Boundary Size (Normal): 0 00:18:15.424 Atomic Boundary Size (PFail): 0 00:18:15.424 Atomic Boundary Offset: 0 00:18:15.424 NGUID/EUI64 Never Reused: No 00:18:15.424 ANA group ID: 1 00:18:15.424 Namespace Write Protected: No 00:18:15.424 Number of LBA Formats: 1 00:18:15.424 Current LBA Format: LBA Format #00 00:18:15.424 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:15.424 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:15.424 rmmod nvme_tcp 00:18:15.424 rmmod nvme_fabrics 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:18:15.424 13:37:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:15.424 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:15.682 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:15.682 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:15.682 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:15.682 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:15.682 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:15.682 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:15.682 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.682 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.682 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.682 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:18:15.682 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:15.683 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:15.683 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:18:15.683 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:15.683 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:15.683 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:15.683 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:15.683 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:15.683 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:15.683 13:37:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:16.617 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:16.617 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:16.617 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:16.617 00:18:16.617 real 0m3.232s 00:18:16.617 user 0m1.125s 00:18:16.617 sys 0m1.489s 00:18:16.617 13:37:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.617 13:37:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.617 ************************************ 00:18:16.617 END TEST nvmf_identify_kernel_target 00:18:16.617 ************************************ 00:18:16.617 13:37:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:16.617 13:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:16.617 13:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.617 13:37:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.617 ************************************ 00:18:16.617 START TEST nvmf_auth_host 00:18:16.617 ************************************ 00:18:16.617 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:16.617 * Looking for test storage... 
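For reference, the nvmf_identify_kernel_target run that just finished drives the kernel nvmet target entirely through configfs, and bash xtrace only prints the echo half of each write (the redirect targets are not expanded). The sketch below condenses the setup and the clean_kernel_target teardown seen in the log; the subsystem NQN, backing device, address and port come straight from the output above, while the nvmet attribute file names are the standard ones and are filled in here as an assumption.

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
# setup, performed before the nvme discover / spdk_nvme_identify calls above
mkdir -p "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"     # assumed attribute path
echo 1 > "$subsys/attr_allow_any_host"                            # assumed attribute path
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"            # device selected by the GPT scan above
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
# teardown (clean_kernel_target), mirroring the rm/rmdir sequence logged above
echo 0 > "$subsys/namespaces/1/enable"                            # assumed attribute path
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet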
00:18:16.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:16.617 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:16.617 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:18:16.617 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.876 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:16.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.876 --rc genhtml_branch_coverage=1 00:18:16.876 --rc genhtml_function_coverage=1 00:18:16.876 --rc genhtml_legend=1 00:18:16.876 --rc geninfo_all_blocks=1 00:18:16.876 --rc geninfo_unexecuted_blocks=1 00:18:16.876 00:18:16.877 ' 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:16.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.877 --rc genhtml_branch_coverage=1 00:18:16.877 --rc genhtml_function_coverage=1 00:18:16.877 --rc genhtml_legend=1 00:18:16.877 --rc geninfo_all_blocks=1 00:18:16.877 --rc geninfo_unexecuted_blocks=1 00:18:16.877 00:18:16.877 ' 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:16.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.877 --rc genhtml_branch_coverage=1 00:18:16.877 --rc genhtml_function_coverage=1 00:18:16.877 --rc genhtml_legend=1 00:18:16.877 --rc geninfo_all_blocks=1 00:18:16.877 --rc geninfo_unexecuted_blocks=1 00:18:16.877 00:18:16.877 ' 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:16.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.877 --rc genhtml_branch_coverage=1 00:18:16.877 --rc genhtml_function_coverage=1 00:18:16.877 --rc genhtml_legend=1 00:18:16.877 --rc geninfo_all_blocks=1 00:18:16.877 --rc geninfo_unexecuted_blocks=1 00:18:16.877 00:18:16.877 ' 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:16.877 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:16.877 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:16.878 Cannot find device "nvmf_init_br" 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:16.878 Cannot find device "nvmf_init_br2" 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:16.878 Cannot find device "nvmf_tgt_br" 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.878 Cannot find device "nvmf_tgt_br2" 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:16.878 Cannot find device "nvmf_init_br" 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:16.878 Cannot find device "nvmf_init_br2" 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:16.878 Cannot find device "nvmf_tgt_br" 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:16.878 Cannot find device "nvmf_tgt_br2" 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:16.878 Cannot find device "nvmf_br" 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:16.878 Cannot find device "nvmf_init_if" 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:16.878 Cannot find device "nvmf_init_if2" 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.878 13:37:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:18:16.878 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:17.136 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:17.136 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:17.136 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:17.137 13:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:17.137 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:17.137 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:17.137 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
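To make the nvmf_veth_init output easier to follow, here is the topology those commands build, reconstructed from the log (interface names and the 10.0.0.x/24 addresses assigned at nvmf/common.sh@190-193). All four *_br veth peers are enslaved to the nvmf_br bridge in the host namespace; the last of those enslave commands and the connectivity pings follow just below.

# initiator side                      veth pair         enslaved to nvmf_br (host bridge)
# nvmf_init_if   10.0.0.1/24  <------------------->  nvmf_init_br   --+
# nvmf_init_if2  10.0.0.2/24  <------------------->  nvmf_init_br2  --+-- nvmf_br
# nvmf_tgt_if    10.0.0.3/24  <------------------->  nvmf_tgt_br    --+
# nvmf_tgt_if2   10.0.0.4/24  <------------------->  nvmf_tgt_br2   --+
# (nvmf_tgt_if and nvmf_tgt_if2 sit inside the nvmf_tgt_ns_spdk namespace)
# quick sanity check of the namespace side, equivalent to what the pings below exercise:
ip netns exec nvmf_tgt_ns_spdk ip -br addr show nvmf_tgt_if   # expect 10.0.0.3/24, state UP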
00:18:17.137 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:17.137 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:17.137 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:17.137 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:17.137 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:17.137 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:17.137 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:17.137 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:17.137 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:17.137 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:18:17.137 00:18:17.137 --- 10.0.0.3 ping statistics --- 00:18:17.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.137 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:18:17.137 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:17.137 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:17.137 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:18:17.137 00:18:17.137 --- 10.0.0.4 ping statistics --- 00:18:17.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.137 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:17.137 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:17.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:17.396 00:18:17.396 --- 10.0.0.1 ping statistics --- 00:18:17.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.396 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:17.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:17.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:18:17.396 00:18:17.396 --- 10.0.0.2 ping statistics --- 00:18:17.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.396 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78817 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78817 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78817 ']' 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
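nvmfappstart above launches nvmf_tgt inside the target namespace (pid 78817) and then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A minimal sketch of that wait, under the assumption that polling scripts/rpc.py rpc_get_methods is an acceptable readiness probe; the real waitforlisten helper in autotest_common.sh is more thorough (max_retries=100, per the log).

spdk=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# poll the RPC socket until the target responds, bailing out if the process dies first
for ((i = 0; i < 100; i++)); do
        "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
done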
00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.396 13:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b70f81e8ac1bb7df9c417435aebd83f4 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.VmJ 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b70f81e8ac1bb7df9c417435aebd83f4 0 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b70f81e8ac1bb7df9c417435aebd83f4 0 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b70f81e8ac1bb7df9c417435aebd83f4 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.VmJ 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.VmJ 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.VmJ 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.333 13:37:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4a2c1d1425ecffb44f27a2d03ca7610936608d02a4fcceebfd6f6e5474933be8 00:18:18.333 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.QWA 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4a2c1d1425ecffb44f27a2d03ca7610936608d02a4fcceebfd6f6e5474933be8 3 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4a2c1d1425ecffb44f27a2d03ca7610936608d02a4fcceebfd6f6e5474933be8 3 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4a2c1d1425ecffb44f27a2d03ca7610936608d02a4fcceebfd6f6e5474933be8 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.QWA 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.QWA 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.QWA 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5822eac1e9d70a83ea9a7ba56efc88848e218d2e10615e1f 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ytc 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5822eac1e9d70a83ea9a7ba56efc88848e218d2e10615e1f 0 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5822eac1e9d70a83ea9a7ba56efc88848e218d2e10615e1f 0 
00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5822eac1e9d70a83ea9a7ba56efc88848e218d2e10615e1f 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ytc 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ytc 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ytc 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3995ce8d94108e06b61256697b66255e34cfb42e79749695 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yH5 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3995ce8d94108e06b61256697b66255e34cfb42e79749695 2 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3995ce8d94108e06b61256697b66255e34cfb42e79749695 2 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3995ce8d94108e06b61256697b66255e34cfb42e79749695 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yH5 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yH5 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.yH5 00:18:18.592 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.593 13:37:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c945d1368aa495f03378982bb69c5f04 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.stn 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c945d1368aa495f03378982bb69c5f04 1 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c945d1368aa495f03378982bb69c5f04 1 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c945d1368aa495f03378982bb69c5f04 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.stn 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.stn 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.stn 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=68467118d333ab98b5e95f943dc77bf6 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.wrv 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 68467118d333ab98b5e95f943dc77bf6 1 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 68467118d333ab98b5e95f943dc77bf6 1 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=68467118d333ab98b5e95f943dc77bf6 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:18.593 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.wrv 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.wrv 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.wrv 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cc02adb9c5438aa0dab8bcbe9aa802354cf1db4ee8232ca5 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.w8e 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cc02adb9c5438aa0dab8bcbe9aa802354cf1db4ee8232ca5 2 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cc02adb9c5438aa0dab8bcbe9aa802354cf1db4ee8232ca5 2 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cc02adb9c5438aa0dab8bcbe9aa802354cf1db4ee8232ca5 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.w8e 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.w8e 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.w8e 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:18.852 13:37:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=79f6357af0023b01d0432ec3a4e144a0 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.IjY 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 79f6357af0023b01d0432ec3a4e144a0 0 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 79f6357af0023b01d0432ec3a4e144a0 0 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=79f6357af0023b01d0432ec3a4e144a0 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.IjY 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.IjY 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.IjY 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4440b9d72506651861a60b14aa4a48c789cf97eb90fd02131483a011addadb45 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ZvB 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4440b9d72506651861a60b14aa4a48c789cf97eb90fd02131483a011addadb45 3 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4440b9d72506651861a60b14aa4a48c789cf97eb90fd02131483a011addadb45 3 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4440b9d72506651861a60b14aa4a48c789cf97eb90fd02131483a011addadb45 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ZvB 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ZvB 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ZvB 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78817 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78817 ']' 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.852 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.853 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.853 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.853 13:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.VmJ 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.QWA ]] 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QWA 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ytc 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.111 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.yH5 ]] 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.yH5 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.stn 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.wrv ]] 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wrv 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.w8e 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.IjY ]] 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.IjY 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:19.370 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ZvB 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:19.371 13:37:31 
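Each gen_dhchap_key call above draws random bytes with xxd and pipes the resulting hex string through format_dhchap_key, which wraps it in the DH-HMAC-CHAP secret container DHHC-1:<hash-id>:<base64>:. The python snippet that nvmf/common.sh feeds to `python -` is not visible in the trace, so the following is a hedged reconstruction; in particular, appending the CRC-32 of the secret in little-endian order before base64-encoding is an assumption:

# Sketch of the gen_dhchap_key/format_dhchap_key pair used above.
# hash-id: 00 = null, 01 = sha256, 02 = sha384, 03 = sha512 (matches digest=0..3 in the trace).
key=$(xxd -p -c0 -l 24 /dev/urandom)     # 48 hex chars, as in "gen_dhchap_key null 48"
python3 - "$key" 0 <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()            # the ASCII hex string itself is the secret
digest = int(sys.argv[2])
crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed byte order for the checksum
print(f"DHHC-1:{digest:02x}:{base64.b64encode(secret + crc).decode()}:")
PY

Decoding the base64 in a key such as DHHC-1:00:NTgyMmVhYzFl... seen later in the trace gives back the 48-character hex string 5822eac1... plus four trailing checksum bytes, which is what the keys[]/ckeys[] files registered above via keyring_file_add_key contain.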
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:19.371 13:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:19.721 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:19.721 Waiting for block devices as requested 00:18:19.721 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:20.033 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:20.292 No valid GPT data, bailing 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:20.292 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:20.551 No valid GPT data, bailing 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:20.551 No valid GPT data, bailing 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:20.551 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:20.552 No valid GPT data, bailing 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid=8ff08136-65da-4f4c-b769-a07096c587b5 -a 10.0.0.1 -t tcp -s 4420 00:18:20.552 00:18:20.552 Discovery Log Number of Records 2, Generation counter 2 00:18:20.552 =====Discovery Log Entry 0====== 00:18:20.552 trtype: tcp 00:18:20.552 adrfam: ipv4 00:18:20.552 subtype: current discovery subsystem 00:18:20.552 treq: not specified, sq flow control disable supported 00:18:20.552 portid: 1 00:18:20.552 trsvcid: 4420 00:18:20.552 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:20.552 traddr: 10.0.0.1 00:18:20.552 eflags: none 00:18:20.552 sectype: none 00:18:20.552 =====Discovery Log Entry 1====== 00:18:20.552 trtype: tcp 00:18:20.552 adrfam: ipv4 00:18:20.552 subtype: nvme subsystem 00:18:20.552 treq: not specified, sq flow control disable supported 00:18:20.552 portid: 1 00:18:20.552 trsvcid: 4420 00:18:20.552 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:20.552 traddr: 10.0.0.1 00:18:20.552 eflags: none 00:18:20.552 sectype: none 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:20.552 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:20.810 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:20.810 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:20.810 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:20.810 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:20.810 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:20.810 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:20.810 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.810 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:20.810 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.810 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.811 nvme0n1 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.811 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.070 nvme0n1 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.070 
13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:21.070 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.071 13:37:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.071 13:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.330 nvme0n1 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:21.330 13:37:33 
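Each pass through the keyid loop above has two halves: nvmet_auth_set_key programs the kernel target's expectations for this host, and connect_authenticate performs the authenticated attach from the SPDK side. A condensed sketch of one iteration, assuming the echoes in nvmet_auth_set_key land in the standard kernel nvmet per-host configfs attributes (the helper's exact target paths are not visible in the trace) and that rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock:

# Target side: tell kernel nvmet which digest, DH group and secrets to expect
# for this host NQN (attribute names are the stock nvmet ones, assumed here).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'  > "$host/dhchap_hash"
echo ffdhe2048       > "$host/dhchap_dhgroup"
echo "$key"          > "$host/dhchap_key"        # DHHC-1 host secret (keys[keyid])
echo "$ckey"         > "$host/dhchap_ctrl_key"   # controller secret for bidirectional auth

# Host side: the rpc_cmd invocations logged above, written as direct rpc.py calls.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc bdev_nvme_get_controllers        # expect a controller named nvme0
rpc bdev_nvme_detach_controller nvme0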
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.330 nvme0n1 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.330 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.589 13:37:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.589 nvme0n1 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.589 
13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.589 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.590 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
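The trace above repeats the same per-key sequence for every (digest, dhgroup, keyid) combination: restrict the initiator's allowed DH-HMAC-CHAP parameters, attach a controller with the host and controller keys for that keyid, confirm the connect (and therefore the authentication) succeeded, then detach. A condensed bash sketch of one such iteration is shown below; it only uses the rpc calls visible in the trace, and the comments mark what is assumed from surrounding context rather than shown here (the rpc_cmd wrapper, the pre-registered key names, and the target-side secrets).

  # Minimal sketch of one loop iteration, assuming:
  #  - rpc_cmd wraps SPDK's scripts/rpc.py against the running bdev_nvme target (as in the test env),
  #  - "key1"/"ckey1" are keyring entries registered earlier in host/auth.sh (not shown in this excerpt),
  #  - the kernel nvmet listener at 10.0.0.1:4420 already holds the matching DH-HMAC-CHAP secrets.
  digest=sha256
  dhgroup=ffdhe2048
  keyid=1

  # Limit the initiator to the digest/dhgroup pair under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Connect with bidirectional authentication (host key plus controller key).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # The attach only succeeds if authentication passed; verify, then tear down for the next keyid.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

For keyid 4 the controller key is empty in the trace, so the --dhchap-ctrlr-key argument is simply omitted and only unidirectional authentication is exercised; the log that follows runs the same cycle again for the ffdhe3072 and ffdhe4096 groups.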
00:18:21.848 nvme0n1 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.848 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:22.107 13:37:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.107 13:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.366 nvme0n1 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.366 13:37:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.366 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.367 13:37:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.367 nvme0n1 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.367 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.626 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.627 nvme0n1 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.627 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.886 nvme0n1 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.886 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.887 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.146 nvme0n1 00:18:23.146 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.146 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.146 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.146 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.146 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.146 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.146 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.146 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.146 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.146 13:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.146 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.146 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.146 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.146 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:23.146 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.146 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:23.146 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:23.146 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:23.146 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:23.146 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:23.146 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:23.146 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.713 13:37:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.713 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:23.714 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:23.714 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:23.714 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.714 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.714 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.972 nvme0n1 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:23.972 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.973 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.973 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:23.973 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.973 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:23.973 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:23.973 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:23.973 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.973 13:37:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.973 13:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.232 nvme0n1 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.232 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.490 nvme0n1 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.490 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.749 nvme0n1 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.749 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:25.007 13:37:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.007 nvme0n1 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.007 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.008 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:25.266 13:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.167 13:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.167 nvme0n1 00:18:27.167 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.167 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.167 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.167 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.167 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.167 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.424 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.683 nvme0n1 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.683 13:37:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.683 13:37:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.683 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.249 nvme0n1 00:18:28.249 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.249 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.249 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.249 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.249 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.249 13:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.249 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:28.250 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.250 
13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.507 nvme0n1 00:18:28.507 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.507 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.507 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.507 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.507 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.507 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.507 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.507 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:28.507 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.507 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.765 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.062 nvme0n1 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.062 13:37:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.062 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.063 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.063 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.063 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.063 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.063 13:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.638 nvme0n1 00:18:29.638 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.638 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.638 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.638 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.638 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.638 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.896 13:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.462 nvme0n1 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.462 
13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.462 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.029 nvme0n1 00:18:31.029 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.029 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.029 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.029 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.029 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.030 13:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.288 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.855 nvme0n1 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.855 13:37:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:31.855 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:31.856 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:31.856 13:37:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.856 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.856 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:31.856 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.856 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:31.856 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:31.856 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:31.856 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:31.856 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.856 13:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.791 nvme0n1 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:32.791 nvme0n1 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:32.791 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.792 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.093 nvme0n1 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:33.094 
13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.094 nvme0n1 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.094 13:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.094 
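
Every pass visible in this stretch follows the same three-level loop the xtrace exposes at host/auth.sh@100-104: for each digest, for each DH group, for each key id, program the target and then authenticate from the host. A condensed paraphrase is below; the array contents are limited to what this excerpt actually exercises, and the keys/ckeys arrays plus the two helpers are assumed to be defined earlier in the script.

  # Driving loop behind this section of the trace (host/auth.sh@100-104, paraphrased).
  digests=(sha256 sha384)                              # digests seen in this excerpt
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)   # DH groups seen in this excerpt
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do               # key ids 0-4; keys[]/ckeys[] hold DHHC-1 secrets
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: accept this key
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: attach, verify, detach
          done
      done
  done
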
13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.094 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.353 nvme0n1 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.353 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.354 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.354 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.354 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.354 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.354 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.354 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:33.354 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.354 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.613 nvme0n1 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.613 nvme0n1 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.613 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.874 
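
The echo 'hmac(sha384)' / echo ffdhe3072 / echo DHHC-1:... steps at host/auth.sh@48-51 are the target-side half of each iteration: they push the digest, DH group, and secret for the host entry on the kernel nvmet target. The xtrace hides the redirections, so the destinations below are an assumption based on the usual nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrlr_key); the values are the ones visible in the log for the sha384/ffdhe3072, key id 0 pass.

  # Sketch only: assumes a kernel nvmet host entry exists for nqn.2024-02.io.spdk:host0
  # and exposes the standard dhchap_* configfs attributes.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host_dir/dhchap_hash"
  echo ffdhe3072 > "$host_dir/dhchap_dhgroup"
  echo 'DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/:' > "$host_dir/dhchap_key"
  # The controller (bidirectional) key is written only when ckey<id> is non-empty, as here:
  echo 'DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=:' > "$host_dir/dhchap_ctrlr_key"
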
13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.874 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.875 13:37:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.875 nvme0n1 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:33.875 13:37:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:33.875 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:34.134 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:34.134 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.134 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.134 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.134 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.134 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.134 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.134 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.134 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.134 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.134 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.134 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.135 nvme0n1 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.135 13:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.135 13:37:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.135 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.395 nvme0n1 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:34.395 
13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.395 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.396 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.396 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.396 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.396 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.396 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.396 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.396 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.396 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:34.396 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.396 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
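
On the host side, each connect_authenticate pass reduces to four SPDK RPCs, all of which are visible in the trace. Re-expressed as direct scripts/rpc.py calls (rpc_cmd in the trace effectively forwards its arguments to rpc.py; "key4" names a secret loaded earlier in the test, outside this excerpt), the sha384/ffdhe3072, key-id-4 pass traced above looks roughly like this, with its verify/detach steps following just below in the log:

  rpc=scripts/rpc.py
  # Limit the host to the digest/DH-group pair under test.
  $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # Attach with DH-HMAC-CHAP; key id 4 has no controller key, so authentication is unidirectional.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
  # Success check: the controller must enumerate as nvme0; then tear it down for the next pass.
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  $rpc bdev_nvme_detach_controller nvme0
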
00:18:34.654 nvme0n1 00:18:34.654 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.654 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:34.655 13:37:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.655 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.914 nvme0n1 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.914 13:37:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.914 13:37:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.914 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.173 nvme0n1 00:18:35.174 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.174 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.174 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.174 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.174 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.174 13:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.174 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.433 nvme0n1 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.433 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.692 nvme0n1 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.692 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.951 nvme0n1 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.951 13:37:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:35.951 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:36.210 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.210 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.210 13:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.469 nvme0n1 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.469 13:37:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.469 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.037 nvme0n1 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.037 13:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.296 nvme0n1 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:37.296 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.297 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:37.297 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.297 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.555 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.814 nvme0n1 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.814 13:37:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.814 13:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.382 nvme0n1 00:18:38.382 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.382 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.382 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.382 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.382 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.382 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.382 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.382 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.382 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.382 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.383 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.952 nvme0n1 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.952 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.211 13:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.779 nvme0n1 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.779 13:37:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.779 13:37:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.779 13:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.345 nvme0n1 00:18:40.345 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.345 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.345 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.345 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.345 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.345 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.345 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.345 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.345 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.345 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:40.604 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.604 
13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.170 nvme0n1 00:18:41.170 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.170 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.170 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.170 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.170 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.170 13:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.171 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.737 nvme0n1 00:18:41.737 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.737 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.737 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.737 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.737 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.737 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:41.996 13:37:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:41.996 13:37:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.996 nvme0n1 00:18:41.996 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:41.997 13:37:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.997 13:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.255 nvme0n1 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.255 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.514 nvme0n1 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:42.514 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.515 nvme0n1 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.515 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.774 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.775 nvme0n1 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.775 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:43.034 nvme0n1 00:18:43.034 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.034 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.034 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.034 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.035 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.294 nvme0n1 00:18:43.294 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.294 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.294 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.294 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.294 13:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:43.294 
13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.294 nvme0n1 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.294 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.553 
13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:43.553 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.554 nvme0n1 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.554 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.812 nvme0n1 00:18:43.812 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.812 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.812 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.812 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.812 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.812 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.812 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.812 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.812 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.813 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.071 nvme0n1 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.071 
13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:44.071 13:37:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.071 13:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.397 nvme0n1 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:44.397 13:37:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.397 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.656 nvme0n1 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.656 13:37:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.656 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.915 nvme0n1 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:44.915 
13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.915 13:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
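The trace repeats the same DH-HMAC-CHAP cycle for every digest/dhgroup/keyid combination: program the key (and, when one exists, the controller key) into the kernel nvmet target, restrict the SPDK host to a single digest and DH group, attach the controller with the matching host-side key pair, confirm that a controller named nvme0 appears, and detach it before the next iteration. Below is a condensed sketch of one iteration assembled only from the commands visible in this log; the variable values are illustrative, and nvmet_auth_set_key and rpc_cmd are helpers provided by host/auth.sh and the surrounding test framework rather than standalone tools.

  digest=sha512 dhgroup=ffdhe4096 keyid=2
  # target side: install key${keyid} (and ckey${keyid}, when one was generated) for this digest/dhgroup
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
  # host side: only advertise the digest and DH group under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # connect with the matching key pair; in the real script --dhchap-ctrlr-key is dropped when ckey${keyid} is empty
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  # authentication succeeded if the controller shows up, then tear it down for the next key id
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

Key id 4 is the case with no controller key (its ckey is empty in the trace), which is why its attach calls carry only --dhchap-key key4; the address 10.0.0.1 is the NVMF_INITIATOR_IP resolved by get_main_ns_ip for the tcp transport.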
00:18:45.173 nvme0n1 00:18:45.173 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.173 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.173 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.173 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.173 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.173 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.173 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.173 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.173 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.173 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:45.431 13:37:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:45.431 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:45.432 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.432 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.432 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.690 nvme0n1 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.690 13:37:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.690 13:37:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.690 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.256 nvme0n1 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.257 13:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.257 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.515 nvme0n1 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.515 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.516 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:46.516 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.516 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:46.516 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:46.516 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:46.516 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:46.516 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.516 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.081 nvme0n1 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.081 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.082 13:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.357 nvme0n1 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:47.357 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjcwZjgxZThhYzFiYjdkZjljNDE3NDM1YWViZDgzZjQ1Ck4/: 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: ]] 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGEyYzFkMTQyNWVjZmZiNDRmMjdhMmQwM2NhNzYxMDkzNjYwOGQwMmE0ZmNjZWViZmQ2ZjZlNTQ3NDkzM2JlOKLyO28=: 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.358 13:37:59 
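Each pass of the dhgroup/keyid loop above follows the same pattern: the nvmet_auth_set_key helper pushes the hash ('hmac(sha512)'), the DH group, and the DHHC-1 secrets for the current key ID to the target side, bdev_nvme_set_options pins the SPDK initiator to that single digest and DH group, and bdev_nvme_attach_controller connects with the matching key pair. A minimal sketch of one iteration, using the suite's rpc_cmd JSON-RPC helper and assuming key0/ckey0 were registered as key names earlier in the run:

# One connect_authenticate pass (sketch): sha512 digest, ffdhe8192 DH group, key ID 0.
# rpc_cmd and the key names key0/ckey0 come from the test environment, not from this snippet.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0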
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.358 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.290 nvme0n1 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:48.290 13:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.290 13:38:00 
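After each attach the script checks that exactly one controller named nvme0 came up and then detaches it before moving on to the next key ID. Condensed into two lines, using the same rpc_cmd helper and the same jq filter that appears in the log:

# Confirm the authenticated connection exists, then tear it down for the next iteration.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0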
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.290 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.881 nvme0n1 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.881 13:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.449 nvme0n1 00:18:49.449 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.449 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.449 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.449 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.449 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.449 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.449 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.449 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2MwMmFkYjljNTQzOGFhMGRhYjhiY2JlOWFhODAyMzU0Y2YxZGI0ZWU4MjMyY2E119qVjw==: 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: ]] 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlmNjM1N2FmMDAyM2IwMWQwNDMyZWMzYTRlMTQ0YTD9Yjac: 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.450 13:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.383 nvme0n1 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDQ0MGI5ZDcyNTA2NjUxODYxYTYwYjE0YWE0YTQ4Yzc4OWNmOTdlYjkwZmQwMjEzMTQ4M2EwMTFhZGRhZGI0NWBCmpU=: 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:50.383 13:38:02 
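Key ID 4 has no controller key (ckey is empty above), so the ckey array expansion used by connect_authenticate produces no extra arguments and the attach carries only --dhchap-key: the host authenticates itself, but the controller is not challenged in return. A sketch of that conditional expansion, assuming ckeys is the suite's array of controller-key names indexed by key ID:

# When ckeys[keyid] is empty the array stays empty and "${ckey[@]}" expands to nothing,
# so the attach request is sent without a dhchap_ctrlr_key (unidirectional authentication).
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key "key${keyid}" "${ckey[@]}"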
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:50.383 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.384 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.960 nvme0n1 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:50.960 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:50.961 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.962 request: 00:18:50.962 { 00:18:50.962 "name": "nvme0", 00:18:50.962 "trtype": "tcp", 00:18:50.962 "traddr": "10.0.0.1", 00:18:50.962 "adrfam": "ipv4", 00:18:50.962 "trsvcid": "4420", 00:18:50.962 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:50.962 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:50.962 "prchk_reftag": false, 00:18:50.962 "prchk_guard": false, 00:18:50.962 "hdgst": false, 00:18:50.962 "ddgst": false, 00:18:50.962 "allow_unrecognized_csi": false, 00:18:50.962 "method": "bdev_nvme_attach_controller", 00:18:50.962 "req_id": 1 00:18:50.962 } 00:18:50.962 Got JSON-RPC error response 00:18:50.962 response: 00:18:50.962 { 00:18:50.962 "code": -5, 00:18:50.962 "message": "Input/output error" 00:18:50.962 } 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.962 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.963 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.963 request: 00:18:50.963 { 00:18:50.963 "name": "nvme0", 00:18:50.963 "trtype": "tcp", 00:18:50.963 "traddr": "10.0.0.1", 00:18:50.963 "adrfam": "ipv4", 00:18:50.965 "trsvcid": "4420", 00:18:50.966 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:50.966 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:50.966 "prchk_reftag": false, 00:18:50.966 "prchk_guard": false, 00:18:50.966 "hdgst": false, 00:18:50.966 "ddgst": false, 00:18:50.966 "dhchap_key": "key2", 00:18:50.966 "allow_unrecognized_csi": false, 00:18:50.966 "method": "bdev_nvme_attach_controller", 00:18:50.966 "req_id": 1 00:18:50.966 } 00:18:50.966 Got JSON-RPC error response 00:18:50.966 response: 00:18:50.966 { 00:18:50.966 "code": -5, 00:18:50.966 "message": "Input/output error" 00:18:50.966 } 00:18:50.966 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:50.966 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:50.966 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:50.966 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:50.966 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:50.966 13:38:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.966 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:50.966 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.966 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.966 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.227 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.228 request: 00:18:51.228 { 00:18:51.228 "name": "nvme0", 00:18:51.228 "trtype": "tcp", 00:18:51.228 "traddr": "10.0.0.1", 00:18:51.228 "adrfam": "ipv4", 00:18:51.228 "trsvcid": "4420", 
00:18:51.228 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:51.228 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:51.228 "prchk_reftag": false, 00:18:51.228 "prchk_guard": false, 00:18:51.228 "hdgst": false, 00:18:51.228 "ddgst": false, 00:18:51.228 "dhchap_key": "key1", 00:18:51.228 "dhchap_ctrlr_key": "ckey2", 00:18:51.228 "allow_unrecognized_csi": false, 00:18:51.228 "method": "bdev_nvme_attach_controller", 00:18:51.228 "req_id": 1 00:18:51.228 } 00:18:51.228 Got JSON-RPC error response 00:18:51.228 response: 00:18:51.228 { 00:18:51.228 "code": -5, 00:18:51.228 "message": "Input/output error" 00:18:51.228 } 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.228 13:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.228 nvme0n1 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.228 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.487 request: 00:18:51.487 { 00:18:51.487 "name": "nvme0", 00:18:51.487 "dhchap_key": "key1", 00:18:51.487 "dhchap_ctrlr_key": "ckey2", 00:18:51.487 "method": "bdev_nvme_set_keys", 00:18:51.487 "req_id": 1 00:18:51.487 } 00:18:51.487 Got JSON-RPC error response 00:18:51.487 response: 00:18:51.487 
{ 00:18:51.487 "code": -13, 00:18:51.487 "message": "Permission denied" 00:18:51.487 } 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:18:51.487 13:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:18:52.421 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.421 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:52.421 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.421 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.421 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.421 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:18:52.421 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgyMmVhYzFlOWQ3MGE4M2VhOWE3YmE1NmVmYzg4ODQ4ZTIxOGQyZTEwNjE1ZTFmSeNnqw==: 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: ]] 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzk5NWNlOGQ5NDEwOGUwNmI2MTI1NjY5N2I2NjI1NWUzNGNmYjQyZTc5NzQ5Njk1oeTDyQ==: 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.422 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.680 nvme0n1 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk0NWQxMzY4YWE0OTVmMDMzNzg5ODJiYjY5YzVmMDRJGHNB: 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: ]] 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njg0NjcxMThkMzMzYWI5OGI1ZTk1Zjk0M2RjNzdiZjbb7iW4: 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:52.680 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.681 request: 00:18:52.681 { 00:18:52.681 "name": "nvme0", 00:18:52.681 "dhchap_key": "key2", 00:18:52.681 "dhchap_ctrlr_key": "ckey1", 00:18:52.681 "method": "bdev_nvme_set_keys", 00:18:52.681 "req_id": 1 00:18:52.681 } 00:18:52.681 Got JSON-RPC error response 00:18:52.681 response: 00:18:52.681 { 00:18:52.681 "code": -13, 00:18:52.681 "message": "Permission denied" 00:18:52.681 } 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:18:52.681 13:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:18:53.617 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.617 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.617 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.617 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:53.617 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.617 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:18:53.618 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:18:53.618 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:18:53.618 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:53.618 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:18:53.618 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:18:53.618 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:53.618 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:18:53.618 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:53.618 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:53.618 rmmod nvme_tcp 00:18:53.876 rmmod nvme_fabrics 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78817 ']' 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78817 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78817 ']' 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78817 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78817 00:18:53.876 killing process with pid 78817 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78817' 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78817 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78817 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:53.876 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:18:53.877 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:18:53.877 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:53.877 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:53.877 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:53.877 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:53.877 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:54.135 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:54.135 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:54.135 13:38:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:54.135 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:54.135 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:54.135 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:54.135 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:54.135 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:54.135 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:54.135 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:54.135 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:54.135 13:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:54.135 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:54.393 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:54.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:54.959 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
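The host/auth.sh@133-@149 steps above rotate DH-HMAC-CHAP keys on the attached controller and check that a key pair the target side is not configured for is refused with JSON-RPC error -13 (Permission denied). A minimal sketch of that flow, written with scripts/rpc.py in place of the test's rpc_cmd wrapper; the controller name (nvme0), key IDs and the 10.0.0.1:4420 listener are taken from the trace, so treat the exact invocations as illustrative rather than a verbatim replay:

    # keys were generated earlier into /tmp/spdk.key-* and registered as key1/key2 (host) and ckey1/ckey2 (controller)
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1          # attach with the pair the target currently expects
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
    # rotating to a pair the target has been reconfigured for succeeds ...
    scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # ... while a pair the target does not accept fails with -13 Permission denied, as in the trace
    scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2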
00:18:54.959 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:55.219 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.VmJ /tmp/spdk.key-null.ytc /tmp/spdk.key-sha256.stn /tmp/spdk.key-sha384.w8e /tmp/spdk.key-sha512.ZvB /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:55.219 13:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:55.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:55.478 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:55.478 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:55.478 00:18:55.478 real 0m38.848s 00:18:55.478 user 0m35.011s 00:18:55.478 sys 0m3.884s 00:18:55.478 13:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.478 ************************************ 00:18:55.478 END TEST nvmf_auth_host 00:18:55.478 ************************************ 00:18:55.478 13:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.478 13:38:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:18:55.478 13:38:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:55.478 13:38:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:55.478 13:38:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.478 13:38:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.478 ************************************ 00:18:55.478 START TEST nvmf_digest 00:18:55.478 ************************************ 00:18:55.478 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:55.738 * Looking for test storage... 
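The auth cleanup above (host/auth.sh@25-27 plus nvmf/common.sh clean_kernel_target) tears down the kernel nvmet target by walking its configfs tree in reverse order of creation. Condensed from the trace; the subsystem-disable step only shows "echo 0" in the log, so the attribute it is written to is left unstated here:

    rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    # echo 0 ...                                   disable the subsystem before unlinking it
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet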
00:18:55.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:55.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.738 --rc genhtml_branch_coverage=1 00:18:55.738 --rc genhtml_function_coverage=1 00:18:55.738 --rc genhtml_legend=1 00:18:55.738 --rc geninfo_all_blocks=1 00:18:55.738 --rc geninfo_unexecuted_blocks=1 00:18:55.738 00:18:55.738 ' 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:55.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.738 --rc genhtml_branch_coverage=1 00:18:55.738 --rc genhtml_function_coverage=1 00:18:55.738 --rc genhtml_legend=1 00:18:55.738 --rc geninfo_all_blocks=1 00:18:55.738 --rc geninfo_unexecuted_blocks=1 00:18:55.738 00:18:55.738 ' 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:55.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.738 --rc genhtml_branch_coverage=1 00:18:55.738 --rc genhtml_function_coverage=1 00:18:55.738 --rc genhtml_legend=1 00:18:55.738 --rc geninfo_all_blocks=1 00:18:55.738 --rc geninfo_unexecuted_blocks=1 00:18:55.738 00:18:55.738 ' 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:55.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.738 --rc genhtml_branch_coverage=1 00:18:55.738 --rc genhtml_function_coverage=1 00:18:55.738 --rc genhtml_legend=1 00:18:55.738 --rc geninfo_all_blocks=1 00:18:55.738 --rc geninfo_unexecuted_blocks=1 00:18:55.738 00:18:55.738 ' 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.738 13:38:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.738 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:55.739 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:55.739 Cannot find device "nvmf_init_br" 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:55.739 Cannot find device "nvmf_init_br2" 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:55.739 Cannot find device "nvmf_tgt_br" 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:18:55.739 Cannot find device "nvmf_tgt_br2" 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:55.739 Cannot find device "nvmf_init_br" 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:55.739 Cannot find device "nvmf_init_br2" 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:55.739 Cannot find device "nvmf_tgt_br" 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:55.739 Cannot find device "nvmf_tgt_br2" 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:55.739 Cannot find device "nvmf_br" 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:18:55.739 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:55.739 Cannot find device "nvmf_init_if" 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:55.998 Cannot find device "nvmf_init_if2" 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:55.998 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:55.998 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:55.998 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:55.998 13:38:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:55.999 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:55.999 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:18:55.999 00:18:55.999 --- 10.0.0.3 ping statistics --- 00:18:55.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.999 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:55.999 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:55.999 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:18:55.999 00:18:55.999 --- 10.0.0.4 ping statistics --- 00:18:55.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.999 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:55.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:55.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:55.999 00:18:55.999 --- 10.0.0.1 ping statistics --- 00:18:55.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.999 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:55.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:18:55.999 00:18:55.999 --- 10.0.0.2 ping statistics --- 00:18:55.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.999 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:55.999 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:56.257 ************************************ 00:18:56.257 START TEST nvmf_digest_clean 00:18:56.257 ************************************ 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
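The nvmf_veth_init block above builds the topology those pings verify: veth pairs bridged on the host side, with the target ends moved into the nvmf_tgt_ns_spdk namespace and TCP port 4420 opened on the initiator interfaces. Abridged to a single initiator/target pair (the trace creates two of each, and also brings every interface up, omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # bridge the host-side peer ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                           # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target namespace -> host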
00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80486 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80486 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80486 ']' 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.257 13:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:56.257 [2024-11-20 13:38:08.041319] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:18:56.257 [2024-11-20 13:38:08.041421] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.257 [2024-11-20 13:38:08.192692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.515 [2024-11-20 13:38:08.256586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.515 [2024-11-20 13:38:08.256647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.515 [2024-11-20 13:38:08.256660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.515 [2024-11-20 13:38:08.256668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.515 [2024-11-20 13:38:08.256676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
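nvmfappstart above launches the target inside that namespace with --wait-for-rpc, so the app pauses until subsystem init is triggered over RPC and the test can apply configuration first. The launch line is taken from the trace; the follow-up calls are the standard SPDK RPCs the collapsed common_target_config/rpc_cmd step would be expected to issue (they are not expanded in the log, so treat this as an assumed outline):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    scripts/rpc.py framework_start_init                   # release the paused app
    scripts/rpc.py nvmf_create_transport -t tcp -o        # transport options per NVMF_TRANSPORT_OPTS above
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME -a
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420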
00:18:56.515 [2024-11-20 13:38:08.257096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.104 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.104 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:57.104 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:57.104 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:57.104 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:57.374 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.374 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:57.374 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:57.374 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:57.374 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.374 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:57.374 [2024-11-20 13:38:09.132291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:57.374 null0 00:18:57.374 [2024-11-20 13:38:09.188739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.374 [2024-11-20 13:38:09.212927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:57.374 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.374 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:57.374 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:57.374 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:57.374 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:57.374 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:57.375 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:57.375 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:57.375 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80518 00:18:57.375 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:57.375 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80518 /var/tmp/bperf.sock 00:18:57.375 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80518 ']' 00:18:57.375 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:18:57.375 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.375 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:57.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:57.375 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.375 13:38:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:57.375 [2024-11-20 13:38:09.281704] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:18:57.375 [2024-11-20 13:38:09.282122] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80518 ] 00:18:57.635 [2024-11-20 13:38:09.437873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.635 [2024-11-20 13:38:09.509474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.571 13:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.571 13:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:58.571 13:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:58.571 13:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:58.571 13:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:58.829 [2024-11-20 13:38:10.555601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:58.829 13:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:58.829 13:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:59.086 nvme0n1 00:18:59.345 13:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:59.345 13:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:59.345 Running I/O for 2 seconds... 
00:19:01.658 14224.00 IOPS, 55.56 MiB/s [2024-11-20T13:38:13.615Z] 14414.50 IOPS, 56.31 MiB/s 00:19:01.658 Latency(us) 00:19:01.658 [2024-11-20T13:38:13.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.658 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:01.658 nvme0n1 : 2.01 14443.88 56.42 0.00 0.00 8855.26 8519.68 24784.52 00:19:01.658 [2024-11-20T13:38:13.615Z] =================================================================================================================== 00:19:01.658 [2024-11-20T13:38:13.615Z] Total : 14443.88 56.42 0.00 0.00 8855.26 8519.68 24784.52 00:19:01.658 { 00:19:01.658 "results": [ 00:19:01.658 { 00:19:01.658 "job": "nvme0n1", 00:19:01.658 "core_mask": "0x2", 00:19:01.658 "workload": "randread", 00:19:01.658 "status": "finished", 00:19:01.658 "queue_depth": 128, 00:19:01.658 "io_size": 4096, 00:19:01.658 "runtime": 2.013587, 00:19:01.658 "iops": 14443.875531576237, 00:19:01.658 "mibps": 56.42138879521968, 00:19:01.658 "io_failed": 0, 00:19:01.658 "io_timeout": 0, 00:19:01.658 "avg_latency_us": 8855.26408522024, 00:19:01.658 "min_latency_us": 8519.68, 00:19:01.658 "max_latency_us": 24784.523636363636 00:19:01.658 } 00:19:01.658 ], 00:19:01.658 "core_count": 1 00:19:01.658 } 00:19:01.658 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:01.658 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:01.658 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:01.658 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:01.658 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:01.658 | select(.opcode=="crc32c") 00:19:01.658 | "\(.module_name) \(.executed)"' 00:19:01.658 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:01.658 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:01.658 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:01.658 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:01.658 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80518 00:19:01.658 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80518 ']' 00:19:01.658 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80518 00:19:01.658 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:01.659 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.659 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80518 00:19:01.659 killing process with pid 80518 00:19:01.659 Received shutdown signal, test time was about 2.000000 seconds 00:19:01.659 00:19:01.659 Latency(us) 00:19:01.659 [2024-11-20T13:38:13.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.659 
[2024-11-20T13:38:13.616Z] =================================================================================================================== 00:19:01.659 [2024-11-20T13:38:13.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.659 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:01.659 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:01.659 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80518' 00:19:01.659 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80518 00:19:01.659 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80518 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80584 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80584 /var/tmp/bperf.sock 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80584 ']' 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:01.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.917 13:38:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:01.917 [2024-11-20 13:38:13.765638] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:19:01.917 [2024-11-20 13:38:13.765970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80584 ] 00:19:01.917 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:01.917 Zero copy mechanism will not be used. 00:19:02.177 [2024-11-20 13:38:13.918972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.177 [2024-11-20 13:38:13.996659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.113 13:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.113 13:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:03.113 13:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:03.113 13:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:03.113 13:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:03.371 [2024-11-20 13:38:15.116822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:03.371 13:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:03.371 13:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:03.629 nvme0n1 00:19:03.629 13:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:03.629 13:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:03.887 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:03.887 Zero copy mechanism will not be used. 00:19:03.887 Running I/O for 2 seconds... 
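Annotation: two notes on the result block that follows. First, the "I/O size of 131072 is greater than zero copy threshold (65536)" lines are informational only: the 128 KiB payloads are above the 64 KiB zero-copy cutoff, so the zero-copy send path is simply skipped and the run is otherwise unaffected. Second, the IOPS and MiB/s columns are redundant with each other, which allows a quick sanity check: MiB/s = IOPS * io_size / 2^20. For the numbers reported below (about 7205 IOPS at 131072-byte reads):

# quick cross-check of the "mibps" field below (pure arithmetic, not part of the test)
awk 'BEGIN { printf "%.2f MiB/s\n", 7205.05 * 131072 / 1048576 }'   # -> 900.63 MiB/s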
00:19:05.863 7184.00 IOPS, 898.00 MiB/s [2024-11-20T13:38:17.820Z] 7208.00 IOPS, 901.00 MiB/s 00:19:05.863 Latency(us) 00:19:05.863 [2024-11-20T13:38:17.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.863 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:05.863 nvme0n1 : 2.00 7205.05 900.63 0.00 0.00 2217.01 2085.24 6166.34 00:19:05.863 [2024-11-20T13:38:17.820Z] =================================================================================================================== 00:19:05.863 [2024-11-20T13:38:17.820Z] Total : 7205.05 900.63 0.00 0.00 2217.01 2085.24 6166.34 00:19:05.863 { 00:19:05.863 "results": [ 00:19:05.863 { 00:19:05.863 "job": "nvme0n1", 00:19:05.863 "core_mask": "0x2", 00:19:05.863 "workload": "randread", 00:19:05.863 "status": "finished", 00:19:05.863 "queue_depth": 16, 00:19:05.863 "io_size": 131072, 00:19:05.863 "runtime": 2.00304, 00:19:05.863 "iops": 7205.048326543653, 00:19:05.863 "mibps": 900.6310408179567, 00:19:05.863 "io_failed": 0, 00:19:05.863 "io_timeout": 0, 00:19:05.863 "avg_latency_us": 2217.014182624471, 00:19:05.863 "min_latency_us": 2085.2363636363634, 00:19:05.863 "max_latency_us": 6166.341818181818 00:19:05.863 } 00:19:05.863 ], 00:19:05.863 "core_count": 1 00:19:05.863 } 00:19:05.863 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:05.863 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:05.863 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:05.863 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:05.863 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:05.863 | select(.opcode=="crc32c") 00:19:05.863 | "\(.module_name) \(.executed)"' 00:19:06.122 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:06.122 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:06.122 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:06.122 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:06.122 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80584 00:19:06.122 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80584 ']' 00:19:06.122 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80584 00:19:06.122 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:06.122 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.122 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80584 00:19:06.122 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:06.122 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:19:06.122 killing process with pid 80584 00:19:06.122 Received shutdown signal, test time was about 2.000000 seconds 00:19:06.122 00:19:06.122 Latency(us) 00:19:06.122 [2024-11-20T13:38:18.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.122 [2024-11-20T13:38:18.079Z] =================================================================================================================== 00:19:06.122 [2024-11-20T13:38:18.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.123 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80584' 00:19:06.123 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80584 00:19:06.123 13:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80584 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80644 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80644 /var/tmp/bperf.sock 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80644 ']' 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:06.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.381 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:06.381 [2024-11-20 13:38:18.259360] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:19:06.381 [2024-11-20 13:38:18.259725] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80644 ] 00:19:06.640 [2024-11-20 13:38:18.407394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.640 [2024-11-20 13:38:18.471237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.640 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.640 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:06.640 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:06.640 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:06.640 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:07.206 [2024-11-20 13:38:18.855159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:07.206 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:07.206 13:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:07.464 nvme0n1 00:19:07.464 13:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:07.464 13:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:07.722 Running I/O for 2 seconds... 
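Annotation: the acc_module/acc_executed pair read back after each run (digest.sh@93-@96 in the traces) is what ties the throughput numbers to the digest path: accel_get_stats is queried on the bperf socket and the crc32c row must show a non-zero executed count from the expected module, which is "software" here because every run above uses scan_dsa=false. A sketch of the same check, with the jq filter copied from the trace:

SPDK=/home/vagrant/spdk_repo/spdk
stats=$($SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)
read -r acc_module acc_executed < <(printf '%s\n' "$stats" | jq -rc '.operations[]
  | select(.opcode=="crc32c")
  | "\(.module_name) \(.executed)"')
# the test passes only if digests were actually computed, and by the expected module
(( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c handled by: $acc_module"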
00:19:09.628 15241.00 IOPS, 59.54 MiB/s [2024-11-20T13:38:21.585Z] 15304.00 IOPS, 59.78 MiB/s 00:19:09.628 Latency(us) 00:19:09.628 [2024-11-20T13:38:21.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.628 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:09.628 nvme0n1 : 2.01 15344.52 59.94 0.00 0.00 8333.41 7983.48 17754.30 00:19:09.628 [2024-11-20T13:38:21.585Z] =================================================================================================================== 00:19:09.628 [2024-11-20T13:38:21.585Z] Total : 15344.52 59.94 0.00 0.00 8333.41 7983.48 17754.30 00:19:09.628 { 00:19:09.628 "results": [ 00:19:09.628 { 00:19:09.628 "job": "nvme0n1", 00:19:09.628 "core_mask": "0x2", 00:19:09.628 "workload": "randwrite", 00:19:09.628 "status": "finished", 00:19:09.628 "queue_depth": 128, 00:19:09.628 "io_size": 4096, 00:19:09.628 "runtime": 2.011337, 00:19:09.628 "iops": 15344.519590700116, 00:19:09.628 "mibps": 59.93952965117233, 00:19:09.628 "io_failed": 0, 00:19:09.628 "io_timeout": 0, 00:19:09.628 "avg_latency_us": 8333.41300256559, 00:19:09.628 "min_latency_us": 7983.476363636363, 00:19:09.628 "max_latency_us": 17754.298181818183 00:19:09.628 } 00:19:09.628 ], 00:19:09.628 "core_count": 1 00:19:09.628 } 00:19:09.628 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:09.628 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:09.628 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:09.628 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:09.628 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:09.628 | select(.opcode=="crc32c") 00:19:09.628 | "\(.module_name) \(.executed)"' 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80644 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80644 ']' 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80644 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80644 00:19:09.887 killing process with pid 80644 00:19:09.887 Received shutdown signal, test time was about 2.000000 seconds 00:19:09.887 00:19:09.887 Latency(us) 00:19:09.887 [2024-11-20T13:38:21.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:09.887 [2024-11-20T13:38:21.844Z] =================================================================================================================== 00:19:09.887 [2024-11-20T13:38:21.844Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80644' 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80644 00:19:09.887 13:38:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80644 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80698 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80698 /var/tmp/bperf.sock 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80698 ']' 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:10.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.146 13:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:10.146 [2024-11-20 13:38:22.062667] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:19:10.146 [2024-11-20 13:38:22.063000] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80698 ] 00:19:10.146 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:10.146 Zero copy mechanism will not be used. 00:19:10.404 [2024-11-20 13:38:22.217092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.404 [2024-11-20 13:38:22.311589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.339 13:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.339 13:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:11.339 13:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:11.339 13:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:11.339 13:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:11.599 [2024-11-20 13:38:23.426204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:11.599 13:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:11.599 13:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:11.857 nvme0n1 00:19:12.116 13:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:12.116 13:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:12.116 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:12.116 Zero copy mechanism will not be used. 00:19:12.116 Running I/O for 2 seconds... 
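Annotation: each run is torn down by the killprocess helper whose xtrace lines repeat throughout this section (autotest_common.sh@954-@978). Reconstructed from those traces, and leaving out the sudo special case the helper checks for, it amounts to roughly the following sketch, not the verbatim source:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1              # @954: pid must be set
    kill -0 "$pid" || return 1             # @958: process must still be alive
    if [ "$(uname)" = Linux ]; then        # @959
        process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_1 for the bperf runs
    fi
    # @964: the real helper branches if process_name is sudo; omitted in this sketch
    echo "killing process with pid $pid"   # @972
    kill "$pid"                            # @973
    wait "$pid"                            # @978: reap it before the next bperfpid is started
}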
00:19:13.988 6284.00 IOPS, 785.50 MiB/s [2024-11-20T13:38:25.945Z] 6256.50 IOPS, 782.06 MiB/s 00:19:13.988 Latency(us) 00:19:13.988 [2024-11-20T13:38:25.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.988 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:13.988 nvme0n1 : 2.00 6254.67 781.83 0.00 0.00 2551.95 1578.82 5183.30 00:19:13.988 [2024-11-20T13:38:25.945Z] =================================================================================================================== 00:19:13.988 [2024-11-20T13:38:25.945Z] Total : 6254.67 781.83 0.00 0.00 2551.95 1578.82 5183.30 00:19:13.988 { 00:19:13.988 "results": [ 00:19:13.988 { 00:19:13.988 "job": "nvme0n1", 00:19:13.988 "core_mask": "0x2", 00:19:13.988 "workload": "randwrite", 00:19:13.988 "status": "finished", 00:19:13.988 "queue_depth": 16, 00:19:13.988 "io_size": 131072, 00:19:13.988 "runtime": 2.004423, 00:19:13.989 "iops": 6254.667802155533, 00:19:13.989 "mibps": 781.8334752694416, 00:19:13.989 "io_failed": 0, 00:19:13.989 "io_timeout": 0, 00:19:13.989 "avg_latency_us": 2551.9469717998363, 00:19:13.989 "min_latency_us": 1578.8218181818181, 00:19:13.989 "max_latency_us": 5183.301818181818 00:19:13.989 } 00:19:13.989 ], 00:19:13.989 "core_count": 1 00:19:13.989 } 00:19:14.247 13:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:14.247 13:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:14.247 13:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:14.247 13:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:14.247 13:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:14.247 | select(.opcode=="crc32c") 00:19:14.247 | "\(.module_name) \(.executed)"' 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80698 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80698 ']' 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80698 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80698 00:19:14.506 killing process with pid 80698 00:19:14.506 Received shutdown signal, test time was about 2.000000 seconds 00:19:14.506 00:19:14.506 Latency(us) 00:19:14.506 [2024-11-20T13:38:26.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:14.506 [2024-11-20T13:38:26.463Z] =================================================================================================================== 00:19:14.506 [2024-11-20T13:38:26.463Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80698' 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80698 00:19:14.506 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80698 00:19:14.765 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80486 00:19:14.765 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80486 ']' 00:19:14.765 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80486 00:19:14.765 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:14.765 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.765 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80486 00:19:14.765 killing process with pid 80486 00:19:14.765 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.765 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.765 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80486' 00:19:14.765 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80486 00:19:14.765 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80486 00:19:15.023 ************************************ 00:19:15.023 END TEST nvmf_digest_clean 00:19:15.023 ************************************ 00:19:15.023 00:19:15.023 real 0m18.776s 00:19:15.023 user 0m37.044s 00:19:15.023 sys 0m4.665s 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:15.023 ************************************ 00:19:15.023 START TEST nvmf_digest_error 00:19:15.023 ************************************ 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:19:15.023 13:38:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80784 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80784 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80784 ']' 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.023 13:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:15.023 [2024-11-20 13:38:26.862982] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:19:15.023 [2024-11-20 13:38:26.863277] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.282 [2024-11-20 13:38:27.014400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.282 [2024-11-20 13:38:27.084011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.282 [2024-11-20 13:38:27.084093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.282 [2024-11-20 13:38:27.084110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.282 [2024-11-20 13:38:27.084121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.282 [2024-11-20 13:38:27.084130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
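Annotation: since the error-variant target is started with -e 0xFFFF, every tracepoint group is enabled, and the two notices above spell out how to inspect them. Both commands below are taken directly from those notices (the spdk_trace binary lives under build/bin in an SPDK build tree):

# live snapshot of the nvmf target's tracepoints (app instance id 0, matching '-i 0' above)
spdk_trace -s nvmf -i 0
# or grab the shared-memory trace file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 .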
00:19:15.282 [2024-11-20 13:38:27.084634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:15.282 [2024-11-20 13:38:27.173178] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.282 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:15.540 [2024-11-20 13:38:27.238797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:15.540 null0 00:19:15.540 [2024-11-20 13:38:27.293625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.540 [2024-11-20 13:38:27.317775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:15.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
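Annotation: the one target-side RPC that separates this bring-up from the clean tests is traced just above: accel_assign_opc routes every crc32c operation through the accel "error" module before common_target_config creates the null0 bdev, the TCP transport, and the 10.0.0.3:4420 listener. A minimal reproduction of just that step against a target started with --wait-for-rpc (rpc.py talks to the default target socket /var/tmp/spdk.sock here; the framework_start_init call is an assumption implied by the sock_subsystem_init notice rather than shown explicitly in the trace):

SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error   # "Operation crc32c will be assigned to module error"
$SPDK/scripts/rpc.py framework_start_init                  # assumption: run as part of common_target_config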
00:19:15.540 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.540 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:15.540 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:15.540 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:15.540 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:15.540 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:15.540 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80806 00:19:15.540 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80806 /var/tmp/bperf.sock 00:19:15.540 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80806 ']' 00:19:15.540 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:15.540 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:15.540 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.541 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:15.541 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.541 13:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:15.541 [2024-11-20 13:38:27.370484] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:19:15.541 [2024-11-20 13:38:27.370773] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80806 ] 00:19:15.799 [2024-11-20 13:38:27.520454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.799 [2024-11-20 13:38:27.612512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.799 [2024-11-20 13:38:27.675907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:16.435 13:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.435 13:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:16.435 13:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:16.435 13:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:17.003 13:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:17.003 13:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.003 13:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:17.003 13:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.003 13:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:17.003 13:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:17.262 nvme0n1 00:19:17.262 13:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:17.262 13:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.262 13:38:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:17.262 13:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.262 13:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:17.262 13:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:17.262 Running I/O for 2 seconds... 
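Annotation: this is where the digest-error behaviour is actually armed. Because crc32c was routed to the error module when the target came up, error injection is kept disabled while the --ddgst controller attaches, the host-side bdev_nvme layer is told to collect NVMe error statistics and keep retrying failed I/O (-1), and only then is crc32c corruption injected on the target — which is presumably why the rest of this run is filled with the "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" lines below. The knobs, copied from the trace (rpc_cmd goes to the target's default RPC socket, bperf_rpc to /var/tmp/bperf.sock):

SPDK=/home/vagrant/spdk_repo/spdk

# host side: keep NVMe error statistics and retry failed I/O instead of failing the bdev
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# target side: no injected errors while the controller attaches...
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# ...then inject crc32c corruption (flags verbatim from the trace above)
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256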
00:19:17.262 [2024-11-20 13:38:29.200710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.262 [2024-11-20 13:38:29.200776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.262 [2024-11-20 13:38:29.200802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.521 [2024-11-20 13:38:29.218387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.521 [2024-11-20 13:38:29.218436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.521 [2024-11-20 13:38:29.218451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.521 [2024-11-20 13:38:29.235965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.521 [2024-11-20 13:38:29.236017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.521 [2024-11-20 13:38:29.236042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.521 [2024-11-20 13:38:29.253549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.521 [2024-11-20 13:38:29.253603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.522 [2024-11-20 13:38:29.253620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.522 [2024-11-20 13:38:29.271110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.522 [2024-11-20 13:38:29.271358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.522 [2024-11-20 13:38:29.271378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.522 [2024-11-20 13:38:29.288935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.522 [2024-11-20 13:38:29.288984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.522 [2024-11-20 13:38:29.289006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.522 [2024-11-20 13:38:29.306633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.522 [2024-11-20 13:38:29.306680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.522 [2024-11-20 13:38:29.306694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.522 [2024-11-20 13:38:29.324176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.522 [2024-11-20 13:38:29.324427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.522 [2024-11-20 13:38:29.324449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.522 [2024-11-20 13:38:29.342193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.522 [2024-11-20 13:38:29.342413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.522 [2024-11-20 13:38:29.342550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.522 [2024-11-20 13:38:29.360106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.522 [2024-11-20 13:38:29.360149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.522 [2024-11-20 13:38:29.360180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.522 [2024-11-20 13:38:29.377896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.522 [2024-11-20 13:38:29.378066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.522 [2024-11-20 13:38:29.378084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.522 [2024-11-20 13:38:29.395555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.522 [2024-11-20 13:38:29.395710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.522 [2024-11-20 13:38:29.395728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.522 [2024-11-20 13:38:29.413239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.522 [2024-11-20 13:38:29.413285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.522 [2024-11-20 13:38:29.413305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.522 [2024-11-20 13:38:29.430969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.522 [2024-11-20 13:38:29.431139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.522 [2024-11-20 13:38:29.431158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.522 [2024-11-20 13:38:29.448868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.522 [2024-11-20 13:38:29.448948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.522 [2024-11-20 13:38:29.448965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.522 [2024-11-20 13:38:29.466657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.522 [2024-11-20 13:38:29.466702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.522 [2024-11-20 13:38:29.466716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.484268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.484310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.484340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.501858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.502039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.502057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.519590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.519635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.519650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.537064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.537264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.537284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.554821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.555026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.555211] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.572731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.572927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.573133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.590689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.590865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.591001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.608430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.608608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.608732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.626235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.626425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.626556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.644037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.644249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.644386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.662027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.662251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.662398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.680077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.680299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22050 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.680428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.698312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.698552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.698677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.716294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.716506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.716630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:17.782 [2024-11-20 13:38:29.734299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:17.782 [2024-11-20 13:38:29.734515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.782 [2024-11-20 13:38:29.734536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.043 [2024-11-20 13:38:29.751990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.043 [2024-11-20 13:38:29.752043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.043 [2024-11-20 13:38:29.752058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.043 [2024-11-20 13:38:29.769623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.043 [2024-11-20 13:38:29.769811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.043 [2024-11-20 13:38:29.769830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.043 [2024-11-20 13:38:29.787283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.043 [2024-11-20 13:38:29.787330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.043 [2024-11-20 13:38:29.787361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.043 [2024-11-20 13:38:29.805072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.043 [2024-11-20 13:38:29.805117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:69 nsid:1 lba:12071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.043 [2024-11-20 13:38:29.805133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.043 [2024-11-20 13:38:29.822612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.043 [2024-11-20 13:38:29.822662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.043 [2024-11-20 13:38:29.822677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.043 [2024-11-20 13:38:29.840214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.043 [2024-11-20 13:38:29.840267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.043 [2024-11-20 13:38:29.840282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.043 [2024-11-20 13:38:29.857840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.043 [2024-11-20 13:38:29.858015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.043 [2024-11-20 13:38:29.858033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.043 [2024-11-20 13:38:29.876112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.043 [2024-11-20 13:38:29.876161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.043 [2024-11-20 13:38:29.876177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.043 [2024-11-20 13:38:29.893936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.043 [2024-11-20 13:38:29.893982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.043 [2024-11-20 13:38:29.893997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.043 [2024-11-20 13:38:29.911624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.043 [2024-11-20 13:38:29.911793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.044 [2024-11-20 13:38:29.911811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.044 [2024-11-20 13:38:29.929353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.044 [2024-11-20 13:38:29.929542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.044 [2024-11-20 13:38:29.929560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.044 [2024-11-20 13:38:29.947016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.044 [2024-11-20 13:38:29.947060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.044 [2024-11-20 13:38:29.947090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.044 [2024-11-20 13:38:29.964597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.044 [2024-11-20 13:38:29.964753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.044 [2024-11-20 13:38:29.964771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.044 [2024-11-20 13:38:29.982408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.044 [2024-11-20 13:38:29.982453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.044 [2024-11-20 13:38:29.982468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:29.999819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.000030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.000049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:30.017471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.017515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.017530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:30.035531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.035581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.035595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:30.053158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 
00:19:18.303 [2024-11-20 13:38:30.053217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.053232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:30.070715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.070761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.070777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:30.088308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.088354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.088369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:30.105688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.105855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.105874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:30.123173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.123232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.123247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:30.140493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.140654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.140672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:30.158148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.158208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.158225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 14169.00 IOPS, 55.35 MiB/s [2024-11-20T13:38:30.260Z] [2024-11-20 
13:38:30.175794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.175844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.175859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:30.193365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.193426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.193441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:30.210753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.210926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.210945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:30.228319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.228368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.228391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.303 [2024-11-20 13:38:30.245849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.303 [2024-11-20 13:38:30.246044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.303 [2024-11-20 13:38:30.246063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.562 [2024-11-20 13:38:30.263456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.562 [2024-11-20 13:38:30.263504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.562 [2024-11-20 13:38:30.263519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.562 [2024-11-20 13:38:30.280872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.562 [2024-11-20 13:38:30.281067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.562 [2024-11-20 13:38:30.281085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.563 [2024-11-20 13:38:30.298589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.563 [2024-11-20 13:38:30.298639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.563 [2024-11-20 13:38:30.298653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.563 [2024-11-20 13:38:30.323471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.563 [2024-11-20 13:38:30.323642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.563 [2024-11-20 13:38:30.323663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.563 [2024-11-20 13:38:30.341163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.563 [2024-11-20 13:38:30.341224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.563 [2024-11-20 13:38:30.341239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.563 [2024-11-20 13:38:30.358839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.563 [2024-11-20 13:38:30.359042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.563 [2024-11-20 13:38:30.359061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.563 [2024-11-20 13:38:30.377785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.563 [2024-11-20 13:38:30.377945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.563 [2024-11-20 13:38:30.377963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.563 [2024-11-20 13:38:30.395423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.563 [2024-11-20 13:38:30.395484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.563 [2024-11-20 13:38:30.395500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.563 [2024-11-20 13:38:30.413006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.563 [2024-11-20 13:38:30.413176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.563 [2024-11-20 13:38:30.413213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.563 [2024-11-20 13:38:30.430538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.563 [2024-11-20 13:38:30.430582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.563 [2024-11-20 13:38:30.430613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.563 [2024-11-20 13:38:30.447945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.563 [2024-11-20 13:38:30.448152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.563 [2024-11-20 13:38:30.448171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.563 [2024-11-20 13:38:30.466575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.563 [2024-11-20 13:38:30.466654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.563 [2024-11-20 13:38:30.466670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.563 [2024-11-20 13:38:30.484383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.563 [2024-11-20 13:38:30.484430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.563 [2024-11-20 13:38:30.484444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.563 [2024-11-20 13:38:30.501886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.563 [2024-11-20 13:38:30.502051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.563 [2024-11-20 13:38:30.502069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.519627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.519783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.519800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.537200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.537255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:18.823 [2024-11-20 13:38:30.537270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.555280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.555351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.555382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.572898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.573017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.573033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.590338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.590424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.590440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.607810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.607975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.607994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.625130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.625181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.625214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.642593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.642648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.642664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.660333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.660574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:6472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.660593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.678244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.678306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.678320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.696382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.696456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.696488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.714514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.714583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.714615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.732051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.732106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.732137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.749426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.749646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.749665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.823 [2024-11-20 13:38:30.767133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:18.823 [2024-11-20 13:38:30.767213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.823 [2024-11-20 13:38:30.767230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.082 [2024-11-20 13:38:30.784694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.082 [2024-11-20 13:38:30.784737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.082 [2024-11-20 13:38:30.784767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.082 [2024-11-20 13:38:30.802174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.082 [2024-11-20 13:38:30.802229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.082 [2024-11-20 13:38:30.802245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.082 [2024-11-20 13:38:30.819638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.082 [2024-11-20 13:38:30.819679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.082 [2024-11-20 13:38:30.819709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.082 [2024-11-20 13:38:30.836956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.082 [2024-11-20 13:38:30.836997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.082 [2024-11-20 13:38:30.837011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.082 [2024-11-20 13:38:30.853779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.082 [2024-11-20 13:38:30.853969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.082 [2024-11-20 13:38:30.853987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.082 [2024-11-20 13:38:30.871439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.082 [2024-11-20 13:38:30.871480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.082 [2024-11-20 13:38:30.871511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.082 [2024-11-20 13:38:30.888708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.082 [2024-11-20 13:38:30.888753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.082 [2024-11-20 13:38:30.888767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.082 [2024-11-20 13:38:30.907247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 
00:19:19.082 [2024-11-20 13:38:30.907446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.082 [2024-11-20 13:38:30.907466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.082 [2024-11-20 13:38:30.925448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.082 [2024-11-20 13:38:30.925492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.082 [2024-11-20 13:38:30.925506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.082 [2024-11-20 13:38:30.942924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.082 [2024-11-20 13:38:30.943083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.082 [2024-11-20 13:38:30.943101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.082 [2024-11-20 13:38:30.960824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.082 [2024-11-20 13:38:30.960904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.082 [2024-11-20 13:38:30.960982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.083 [2024-11-20 13:38:30.978650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.083 [2024-11-20 13:38:30.978723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.083 [2024-11-20 13:38:30.978755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.083 [2024-11-20 13:38:30.996354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.083 [2024-11-20 13:38:30.996403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.083 [2024-11-20 13:38:30.996434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.083 [2024-11-20 13:38:31.013826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.083 [2024-11-20 13:38:31.014032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.083 [2024-11-20 13:38:31.014051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.083 [2024-11-20 13:38:31.031361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.083 [2024-11-20 13:38:31.031531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.083 [2024-11-20 13:38:31.031549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.342 [2024-11-20 13:38:31.049151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.342 [2024-11-20 13:38:31.049218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.342 [2024-11-20 13:38:31.049234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.342 [2024-11-20 13:38:31.066725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.342 [2024-11-20 13:38:31.066765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.342 [2024-11-20 13:38:31.066794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.342 [2024-11-20 13:38:31.085132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.342 [2024-11-20 13:38:31.085183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.342 [2024-11-20 13:38:31.085213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.342 [2024-11-20 13:38:31.103620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.342 [2024-11-20 13:38:31.103694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.342 [2024-11-20 13:38:31.103726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.342 [2024-11-20 13:38:31.122378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.342 [2024-11-20 13:38:31.122442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.342 [2024-11-20 13:38:31.122458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.342 [2024-11-20 13:38:31.140233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.342 [2024-11-20 13:38:31.140283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.342 [2024-11-20 13:38:31.140313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.342 [2024-11-20 13:38:31.157854] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.342 [2024-11-20 13:38:31.158066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.342 [2024-11-20 13:38:31.158085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.342 14232.00 IOPS, 55.59 MiB/s [2024-11-20T13:38:31.299Z] [2024-11-20 13:38:31.175204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24d8230) 00:19:19.342 [2024-11-20 13:38:31.175256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.342 [2024-11-20 13:38:31.175288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.342 00:19:19.342 Latency(us) 00:19:19.342 [2024-11-20T13:38:31.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.342 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:19.342 nvme0n1 : 2.01 14257.48 55.69 0.00 0.00 8970.39 8221.79 33840.41 00:19:19.342 [2024-11-20T13:38:31.299Z] =================================================================================================================== 00:19:19.342 [2024-11-20T13:38:31.299Z] Total : 14257.48 55.69 0.00 0.00 8970.39 8221.79 33840.41 00:19:19.342 { 00:19:19.342 "results": [ 00:19:19.342 { 00:19:19.342 "job": "nvme0n1", 00:19:19.342 "core_mask": "0x2", 00:19:19.342 "workload": "randread", 00:19:19.342 "status": "finished", 00:19:19.342 "queue_depth": 128, 00:19:19.342 "io_size": 4096, 00:19:19.342 "runtime": 2.005403, 00:19:19.342 "iops": 14257.483408571743, 00:19:19.342 "mibps": 55.69329456473337, 00:19:19.342 "io_failed": 0, 00:19:19.342 "io_timeout": 0, 00:19:19.342 "avg_latency_us": 8970.390028997304, 00:19:19.342 "min_latency_us": 8221.789090909091, 00:19:19.342 "max_latency_us": 33840.40727272727 00:19:19.342 } 00:19:19.342 ], 00:19:19.342 "core_count": 1 00:19:19.342 } 00:19:19.342 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:19.342 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:19.342 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:19.342 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:19.342 | .driver_specific 00:19:19.342 | .nvme_error 00:19:19.342 | .status_code 00:19:19.342 | .command_transient_transport_error' 00:19:19.601 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 )) 00:19:19.601 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80806 00:19:19.601 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80806 ']' 00:19:19.601 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80806 00:19:19.601 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # 
uname 00:19:19.601 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.601 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80806 00:19:19.601 killing process with pid 80806 00:19:19.601 Received shutdown signal, test time was about 2.000000 seconds 00:19:19.601 00:19:19.601 Latency(us) 00:19:19.601 [2024-11-20T13:38:31.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.601 [2024-11-20T13:38:31.558Z] =================================================================================================================== 00:19:19.601 [2024-11-20T13:38:31.558Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:19.601 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:19.601 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:19.601 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80806' 00:19:19.601 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80806 00:19:19.601 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80806 00:19:19.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80866 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80866 /var/tmp/bperf.sock 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80866 ']' 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:19:19.861 13:38:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:19.861 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:19.861 Zero copy mechanism will not be used. 
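The `(( 112 > 0 ))` check traced above is how get_transient_errcount in host/digest.sh decides the first pass really hit the injected failures: it reads the per-bdev NVMe error counters over the bperf RPC socket and requires at least one COMMAND TRANSIENT TRANSPORT ERROR completion. A minimal standalone sketch of that query, using only the rpc.py path, socket, bdev name, and jq filter already visible in this trace:

    #!/usr/bin/env bash
    # Ask the bdevperf app for nvme0n1's I/O statistics and extract the count of
    # completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock
    errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # These per-status-code counters are exposed because the test enables
    # --nvme-error-stat via bdev_nvme_set_options; the phase passes when the
    # count is non-zero.
    (( errcount > 0 )) && echo "observed $errcount transient transport errors"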
00:19:19.861 [2024-11-20 13:38:31.765226] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:19:19.861 [2024-11-20 13:38:31.765335] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80866 ] 00:19:20.120 [2024-11-20 13:38:31.910771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.120 [2024-11-20 13:38:31.974624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.120 [2024-11-20 13:38:32.029737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:20.379 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.379 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:20.379 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:20.379 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:20.638 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:20.638 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.638 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:20.638 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.638 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:20.638 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:20.896 nvme0n1 00:19:20.896 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:20.896 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.896 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:20.896 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.896 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:20.896 13:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:20.896 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:20.896 Zero copy mechanism will not be used. 00:19:20.896 Running I/O for 2 seconds... 
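The command trace above is the wiring for the second pass (randread, 131072-byte I/O, queue depth 16): crc32c error injection is switched off while the controller is attached with data digest enabled, then switched to corrupt mode right before bdevperf starts issuing I/O, so reads complete with data digest errors. A condensed sketch of that sequence, built only from the commands in this trace; the one added assumption is that rpc_cmd (used for accel_error_inject_error) resolves to the default RPC socket of the NVMe-oF target app rather than bperf.sock:

    #!/usr/bin/env bash
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    BPERF_SOCK=/var/tmp/bperf.sock

    # Host side (bdevperf): collect per-bdev NVMe error statistics so injected
    # digest failures show up in bdev_get_iostat; retry-count copied from the trace.
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target side (assumed default socket): leave crc32c intact while attaching
    # the controller with data digest (--ddgst) enabled.
    "$RPC" accel_error_inject_error -o crc32c -t disable
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-enable injection in corrupt mode (arguments copied verbatim from the
    # trace) and drive the timed I/O run through bdevperf's RPC helper.
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$BPERF_PY" -s "$BPERF_SOCK" perform_tests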
00:19:20.896 [2024-11-20 13:38:32.849690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:20.896 [2024-11-20 13:38:32.849749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.896 [2024-11-20 13:38:32.849767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.854025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.854067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.854083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.858396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.858432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.858446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.862775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.862816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.862830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.867112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.867151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.867165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.871526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.871694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.871713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.875992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.876033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.876048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.880388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.880427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.880441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.884732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.884772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.884786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.889238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.889279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.889294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.893550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.893590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.893605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.897839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.897879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.897893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.902196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.902236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.902250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.906453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.906493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.906507] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.910823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.910863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.910877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.915128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.915170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.915195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.919511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.919551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.919565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.923814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.923853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.923867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.928219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.928254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.928267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.932451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.932488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.932501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.937170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.157 [2024-11-20 13:38:32.937218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.157 [2024-11-20 13:38:32.937233] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.157 [2024-11-20 13:38:32.941740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.941782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:32.941797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:32.946178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.946238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:32.946253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:32.950595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.950637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:32.950651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:32.955017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.955057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:32.955071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:32.959383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.959422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:32.959437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:32.963651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.963691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:32.963705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:32.967947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.967988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:21.158 [2024-11-20 13:38:32.968002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:32.972199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.972239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:32.972254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:32.976437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.976475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:32.976490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:32.980755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.980794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:32.980808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:32.985146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.985209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:32.985224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:32.989883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.989921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:32.989935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:32.994734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.994774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:32.994788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:32.999452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:32.999610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:32.999627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:33.004343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:33.004382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:33.004396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:33.009107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:33.009147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:33.009161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:33.013882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:33.013923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:33.013938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:33.018549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:33.018699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:33.018717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:33.023388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:33.023427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:33.023441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:33.028246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:33.028286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:33.028301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:33.033225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:33.033263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:33.033277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:33.037951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:33.037988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:33.038002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:33.042694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:33.042732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:33.042746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:33.047312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:33.047358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:33.047372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:33.052046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:33.052085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:33.052099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:33.056721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:33.056875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:33.056893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.158 [2024-11-20 13:38:33.061522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.158 [2024-11-20 13:38:33.061561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.158 [2024-11-20 13:38:33.061575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.159 [2024-11-20 13:38:33.066312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 
00:19:21.159 [2024-11-20 13:38:33.066350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.159 [2024-11-20 13:38:33.066363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.159 [2024-11-20 13:38:33.071051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.159 [2024-11-20 13:38:33.071090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.159 [2024-11-20 13:38:33.071103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.159 [2024-11-20 13:38:33.075709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.159 [2024-11-20 13:38:33.075861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.159 [2024-11-20 13:38:33.075879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.159 [2024-11-20 13:38:33.080904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.159 [2024-11-20 13:38:33.080951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.159 [2024-11-20 13:38:33.080965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.159 [2024-11-20 13:38:33.085597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.159 [2024-11-20 13:38:33.085639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.159 [2024-11-20 13:38:33.085654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.159 [2024-11-20 13:38:33.091196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.159 [2024-11-20 13:38:33.091391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.159 [2024-11-20 13:38:33.091409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.159 [2024-11-20 13:38:33.097842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.159 [2024-11-20 13:38:33.097881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.159 [2024-11-20 13:38:33.097920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.159 [2024-11-20 13:38:33.102801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.159 [2024-11-20 13:38:33.102952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.159 [2024-11-20 13:38:33.102970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.159 [2024-11-20 13:38:33.107571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.159 [2024-11-20 13:38:33.107611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.159 [2024-11-20 13:38:33.107625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.419 [2024-11-20 13:38:33.112652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.419 [2024-11-20 13:38:33.112690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.419 [2024-11-20 13:38:33.112704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.419 [2024-11-20 13:38:33.117440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.419 [2024-11-20 13:38:33.117592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.419 [2024-11-20 13:38:33.117609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.419 [2024-11-20 13:38:33.122391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.419 [2024-11-20 13:38:33.122437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.419 [2024-11-20 13:38:33.122451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.419 [2024-11-20 13:38:33.127034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.419 [2024-11-20 13:38:33.127073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.419 [2024-11-20 13:38:33.127087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.419 [2024-11-20 13:38:33.131663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.131702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.131716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.136390] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.136543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.136561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.141247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.141288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.141302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.145896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.145935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.145950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.150639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.150677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.150692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.155328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.155366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.155379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.159999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.160038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.160052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.164670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.164711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.164725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:19:21.420 [2024-11-20 13:38:33.169434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.169473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.169487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.174048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.174086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.174100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.178759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.178917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.178935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.183585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.183623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.183638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.188181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.188230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.188244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.192931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.192969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.192983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.197604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.197755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.197773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.202478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.202517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.202532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.207162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.207213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.207227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.211804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.211841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.211854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.216451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.216489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.216502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.221132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.221169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.221182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.225785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.225822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.225837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.230564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.230601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.230615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.235273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.235310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.235324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.239949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.239989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.240004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.244665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.244704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.244718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.249362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.249401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.249415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.254080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.420 [2024-11-20 13:38:33.254118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.420 [2024-11-20 13:38:33.254132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.420 [2024-11-20 13:38:33.258725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.258886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.258903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.263569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.263608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 
[2024-11-20 13:38:33.263623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.268265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.268302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.268316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.272583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.272622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.272636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.276854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.276894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.276917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.281126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.281164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.281177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.285533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.285687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.285704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.290025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.290065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.290079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.294416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.294453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.294467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.298704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.298742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.298756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.303065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.303106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.303120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.307529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.307571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.307585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.311820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.311860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.311874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.317046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.317214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.317233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.322592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.322631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.322645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.327159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.327210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.327225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.331528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.331569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.331584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.335830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.335869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.335883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.340156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.340208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.340224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.344581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.344733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.344751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.348993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.349033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.349047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.353383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.353422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.353436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.357616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.357654] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.357668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.361911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.361950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.361964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.366211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.366249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.366262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.421 [2024-11-20 13:38:33.370524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.421 [2024-11-20 13:38:33.370563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.421 [2024-11-20 13:38:33.370576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.374930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.374970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.374985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.379199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.379235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.379249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.383570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.383731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.383749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.387980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.388022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.388038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.392363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.392403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.392416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.396653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.396692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.396706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.400914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.400957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.400970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.405268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.405309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.405324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.409682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.409722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.409736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.414000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.414039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.414053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.418313] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.418354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.418368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.422588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.422627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.422641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.426852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.426890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.426904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.431082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.431123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.431138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.435412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.435477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.435491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.440830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.440866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.440896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.445741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.445901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.445920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:19:21.682 [2024-11-20 13:38:33.450179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.450228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.450242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.454531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.454570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.454584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.458769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.458807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.458821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.463034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.463074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.463088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.467170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.467224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.467239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.471378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.682 [2024-11-20 13:38:33.471415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.682 [2024-11-20 13:38:33.471429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.682 [2024-11-20 13:38:33.475863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.475901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.475915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.480163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.480215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.480229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.484493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.484646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.484664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.488949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.488988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.489003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.493271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.493310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.493324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.497563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.497601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.497615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.501885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.501924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.501938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.506228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.506268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.506282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.510623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.510664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.510678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.514999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.515039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.515053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.519516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.519671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.519690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.524087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.524127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.524141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.528375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.528413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.528426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.532680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.532719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.532733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.537032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.537071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 
[2024-11-20 13:38:33.537084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.541403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.541444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.541458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.545676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.545715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.545729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.550044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.550084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.550098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.554370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.554411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.554425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.558730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.558769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.558783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.563120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.563159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.563174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.567520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.567561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9952 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.567575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.571770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.571809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.571823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.576080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.576119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.576132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.580406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.580445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.580460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.584742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.584781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.584795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.683 [2024-11-20 13:38:33.589005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.683 [2024-11-20 13:38:33.589045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.683 [2024-11-20 13:38:33.589059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.684 [2024-11-20 13:38:33.593285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.684 [2024-11-20 13:38:33.593337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.684 [2024-11-20 13:38:33.593351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.684 [2024-11-20 13:38:33.597620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.684 [2024-11-20 13:38:33.597659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.684 [2024-11-20 13:38:33.597673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.684 [2024-11-20 13:38:33.601979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.684 [2024-11-20 13:38:33.602018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.684 [2024-11-20 13:38:33.602032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.684 [2024-11-20 13:38:33.606372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.684 [2024-11-20 13:38:33.606413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.684 [2024-11-20 13:38:33.606427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.684 [2024-11-20 13:38:33.610668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.684 [2024-11-20 13:38:33.610707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.684 [2024-11-20 13:38:33.610720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.684 [2024-11-20 13:38:33.615006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.684 [2024-11-20 13:38:33.615044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.684 [2024-11-20 13:38:33.615058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.684 [2024-11-20 13:38:33.619330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.684 [2024-11-20 13:38:33.619373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.684 [2024-11-20 13:38:33.619387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.684 [2024-11-20 13:38:33.623635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.684 [2024-11-20 13:38:33.623674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.684 [2024-11-20 13:38:33.623687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.684 [2024-11-20 13:38:33.627911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.684 [2024-11-20 13:38:33.627950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.684 [2024-11-20 13:38:33.627964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.684 [2024-11-20 13:38:33.632251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.684 [2024-11-20 13:38:33.632291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.684 [2024-11-20 13:38:33.632306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.944 [2024-11-20 13:38:33.636573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.944 [2024-11-20 13:38:33.636728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.944 [2024-11-20 13:38:33.636746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.944 [2024-11-20 13:38:33.641057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.944 [2024-11-20 13:38:33.641097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.944 [2024-11-20 13:38:33.641111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.944 [2024-11-20 13:38:33.645415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.944 [2024-11-20 13:38:33.645456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.944 [2024-11-20 13:38:33.645470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.944 [2024-11-20 13:38:33.649770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.944 [2024-11-20 13:38:33.649809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.944 [2024-11-20 13:38:33.649823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.944 [2024-11-20 13:38:33.654088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.944 [2024-11-20 13:38:33.654127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.944 [2024-11-20 13:38:33.654141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.944 [2024-11-20 13:38:33.658408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 
00:19:21.944 [2024-11-20 13:38:33.658449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.944 [2024-11-20 13:38:33.658463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.944 [2024-11-20 13:38:33.662766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.944 [2024-11-20 13:38:33.662805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.944 [2024-11-20 13:38:33.662819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.944 [2024-11-20 13:38:33.667080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.944 [2024-11-20 13:38:33.667118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.944 [2024-11-20 13:38:33.667132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.944 [2024-11-20 13:38:33.671477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.944 [2024-11-20 13:38:33.671518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.944 [2024-11-20 13:38:33.671533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.944 [2024-11-20 13:38:33.675804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.675843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.675858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.680085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.680125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.680139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.684519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.684560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.684575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.688933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.688972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.688986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.693247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.693285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.693299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.697564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.697602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.697616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.701884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.701923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.701936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.706132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.706174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.706200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.710458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.710496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.710510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.714716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.714755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.714769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.719113] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.719155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.719169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.723462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.723503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.723518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.727800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.727839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.727853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.732165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.732217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.732231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.736450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.736491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.736505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.740747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.740786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.740800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.745068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.745106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.745120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:19:21.945 [2024-11-20 13:38:33.749516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.749675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.749693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.754030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.754070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.754084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.758436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.758474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.945 [2024-11-20 13:38:33.758488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.945 [2024-11-20 13:38:33.762733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.945 [2024-11-20 13:38:33.762772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.762786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.767061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.767100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.767114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.771443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.771484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.771499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.775726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.775765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.775779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.780074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.780112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.780127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.784772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.784964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.784982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.789311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.789351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.789365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.793626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.793665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.793680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.798571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.798608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.798622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.802927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.802967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.802981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.807305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.807345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.807360] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.811691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.811731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.811745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.816034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.816073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.816087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.820423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.820463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.820478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.824807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.824848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.824862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.829147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.829219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.829236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.833470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.833508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.833522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.838406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.838463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.838487] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.843340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.843383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.843399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.946 6882.00 IOPS, 860.25 MiB/s [2024-11-20T13:38:33.903Z] [2024-11-20 13:38:33.849532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.849685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.849703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.854051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.854092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.854106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.858342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.858380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.858394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.862642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.862681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.946 [2024-11-20 13:38:33.862696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.946 [2024-11-20 13:38:33.867008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.946 [2024-11-20 13:38:33.867047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.947 [2024-11-20 13:38:33.867061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.947 [2024-11-20 13:38:33.871310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.947 [2024-11-20 13:38:33.871348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.947 [2024-11-20 13:38:33.871362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.947 [2024-11-20 13:38:33.875653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.947 [2024-11-20 13:38:33.875692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.947 [2024-11-20 13:38:33.875706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.947 [2024-11-20 13:38:33.880001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.947 [2024-11-20 13:38:33.880040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.947 [2024-11-20 13:38:33.880055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.947 [2024-11-20 13:38:33.884327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.947 [2024-11-20 13:38:33.884368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.947 [2024-11-20 13:38:33.884382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.947 [2024-11-20 13:38:33.888709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.947 [2024-11-20 13:38:33.888749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.947 [2024-11-20 13:38:33.888763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.947 [2024-11-20 13:38:33.893079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.947 [2024-11-20 13:38:33.893118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.947 [2024-11-20 13:38:33.893132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.947 [2024-11-20 13:38:33.897544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:21.947 [2024-11-20 13:38:33.897587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.947 [2024-11-20 13:38:33.897601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.207 [2024-11-20 13:38:33.901900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.207 [2024-11-20 13:38:33.901943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.207 [2024-11-20 13:38:33.901957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.207 [2024-11-20 13:38:33.906234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.207 [2024-11-20 13:38:33.906271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.207 [2024-11-20 13:38:33.906284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.207 [2024-11-20 13:38:33.910470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.207 [2024-11-20 13:38:33.910508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.207 [2024-11-20 13:38:33.910522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.207 [2024-11-20 13:38:33.914795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.207 [2024-11-20 13:38:33.914833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.207 [2024-11-20 13:38:33.914847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.207 [2024-11-20 13:38:33.919146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.207 [2024-11-20 13:38:33.919199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.207 [2024-11-20 13:38:33.919214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.207 [2024-11-20 13:38:33.923470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.207 [2024-11-20 13:38:33.923508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.923521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.927857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.927895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.927909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.932202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 
00:19:22.208 [2024-11-20 13:38:33.932240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.932254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.936455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.936493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.936507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.940718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.940757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.940771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.945063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.945100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.945114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.949349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.949386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.949399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.953598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.953631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.953645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.958269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.958325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.958340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.962660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.962833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.962862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.967753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.967798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.967814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.972148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.972210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.972228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.976556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.976596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.976610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.980989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.981030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.981045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.985397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.985438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.985452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.989675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.989716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.989730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.993999] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.994038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.994051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:33.998267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:33.998305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:33.998318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:34.002587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:34.002625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:34.002639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:34.006941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:34.006978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:34.006992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:34.011847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:34.011893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:34.011906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:34.016402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:34.016438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:34.016452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:34.020761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:34.020799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:34.020812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:19:22.208 [2024-11-20 13:38:34.025073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:34.025111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:34.025125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:34.029607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:34.029656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:34.029669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:34.034035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:34.034081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.208 [2024-11-20 13:38:34.034094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.208 [2024-11-20 13:38:34.038472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.208 [2024-11-20 13:38:34.038510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.038523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.042803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.042840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.042853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.047102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.047139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.047158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.051475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.051512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.051525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.055726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.055764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.055777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.060034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.060073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.060086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.064347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.064384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.064398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.068818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.068856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.068869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.073250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.073286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.073300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.077563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.077600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.077614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.081847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.081885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.081898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.086171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.086220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.086234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.090487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.090526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.090539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.094809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.094847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.094861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.099123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.099160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.099174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.103478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.103516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.103530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.107755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.107798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.107811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.112100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.112138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 
[2024-11-20 13:38:34.112152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.116435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.116472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.116486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.120750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.120787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.120801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.125137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.125195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.125210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.129505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.129542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.129556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.133834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.133873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.133886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.138231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.138266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.138280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.142487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.142525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17664 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.142538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.146815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.146853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.146866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.151044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.209 [2024-11-20 13:38:34.151081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.209 [2024-11-20 13:38:34.151094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.209 [2024-11-20 13:38:34.155513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.210 [2024-11-20 13:38:34.155551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.210 [2024-11-20 13:38:34.155564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.210 [2024-11-20 13:38:34.159887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.210 [2024-11-20 13:38:34.159925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.210 [2024-11-20 13:38:34.159938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.164148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.164196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.164211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.168457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.168495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.168508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.172864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.172902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.172924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.177118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.177155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.177169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.181435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.181471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.181485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.185695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.185733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.185746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.190016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.190054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.190067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.194344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.194381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.194395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.198630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.198666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.198680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.203011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.203050] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.203075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.207378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.207416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.207429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.211742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.211780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.211794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.216085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.216123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.216137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.220369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.220406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.220419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.224617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.224656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.224669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.228806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.228843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.228856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.233051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 
00:19:22.472 [2024-11-20 13:38:34.233088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.233109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.237200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.237235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.237249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.241481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.241518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.241532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.472 [2024-11-20 13:38:34.245849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.472 [2024-11-20 13:38:34.245887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.472 [2024-11-20 13:38:34.245900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.250125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.250163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.250176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.254439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.254476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.254489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.258764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.258802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.258815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.263110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.263148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.263162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.267372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.267409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.267422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.271747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.271785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.271798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.276069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.276107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.276119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.280393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.280429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.280442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.284767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.284804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.284817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.289099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.289135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.289148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.293364] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.293400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.293413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.297631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.297668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.297681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.302004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.302042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.302056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.306312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.306349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.306362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.310568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.310604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.310618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.314854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.314892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.314905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.319168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.319215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.319230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:19:22.473 [2024-11-20 13:38:34.323541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.323580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.323593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.327867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.327904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.327917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.332115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.332151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.473 [2024-11-20 13:38:34.332164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.473 [2024-11-20 13:38:34.336422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.473 [2024-11-20 13:38:34.336459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.336473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.340688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.340725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.340738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.345067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.345106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.345119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.349315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.349351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.349365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.353563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.353600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.353613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.357828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.357866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.357879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.362103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.362140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.362153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.366497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.366535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.366549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.370832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.370869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.370883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.375137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.375174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.375198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.379462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.379499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.379512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.383758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.383795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.383808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.388049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.388086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.388099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.392312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.392348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.392362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.396627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.396665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.396678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.400984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.401020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.401033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.405306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.405342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.405355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.409607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.409656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:22.474 [2024-11-20 13:38:34.409669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.413944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.413982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.413995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.418295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.418333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.418346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.474 [2024-11-20 13:38:34.422593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.474 [2024-11-20 13:38:34.422630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.474 [2024-11-20 13:38:34.422643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.426932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.426970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.426983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.431234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.431269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.431282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.435556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.435593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.435606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.439788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.439826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13024 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.439839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.444142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.444180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.444207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.448476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.448516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.448530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.452819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.452857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.452871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.457175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.457223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.457238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.461546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.461584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.461597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.465898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.465937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.465950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.470195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.470232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.470245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.474443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.474480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.474493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.478674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.478711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.478724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.482971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.483008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.483021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.487286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.487321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.487335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.491593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.491630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.491644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.496009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.496048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.496061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.500324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 
00:19:22.737 [2024-11-20 13:38:34.500361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.500375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.504611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.504649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.504663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.508923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.508960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.508974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.513204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.513239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.513253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.517449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.517484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.517497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.737 [2024-11-20 13:38:34.521761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.737 [2024-11-20 13:38:34.521798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.737 [2024-11-20 13:38:34.521811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.526054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.526091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.526104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.530461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.530499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.530512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.535005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.535043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.535056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.539470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.539507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.539521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.543807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.543844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.543858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.548146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.548197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.548212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.552569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.552608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.552622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.556934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.556971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.556984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.561271] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.561308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.561322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.565467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.565505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.565518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.569788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.569826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.569839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.574063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.574100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.574113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.578306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.578343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.578356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.582515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.582552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.582565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.586787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.586825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.586838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:19:22.738 [2024-11-20 13:38:34.591023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.591060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.591074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.595359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.595396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.595409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.599650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.599687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.599701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.603912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.603950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.603963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.608221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.608259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.608272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.612506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.612544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.612557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.616760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.616797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.616810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.621024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.621061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.621074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.625285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.625321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.625334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.629508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.738 [2024-11-20 13:38:34.629545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.738 [2024-11-20 13:38:34.629559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.738 [2024-11-20 13:38:34.633758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.633794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.739 [2024-11-20 13:38:34.633808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.739 [2024-11-20 13:38:34.638058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.638095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.739 [2024-11-20 13:38:34.638108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.739 [2024-11-20 13:38:34.642393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.642429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.739 [2024-11-20 13:38:34.642443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.739 [2024-11-20 13:38:34.646689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.646727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.739 [2024-11-20 13:38:34.646740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.739 [2024-11-20 13:38:34.650992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.651030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.739 [2024-11-20 13:38:34.651043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.739 [2024-11-20 13:38:34.655287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.655319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.739 [2024-11-20 13:38:34.655332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.739 [2024-11-20 13:38:34.659559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.659596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.739 [2024-11-20 13:38:34.659609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.739 [2024-11-20 13:38:34.663897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.663938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.739 [2024-11-20 13:38:34.663952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.739 [2024-11-20 13:38:34.668265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.668302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.739 [2024-11-20 13:38:34.668315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.739 [2024-11-20 13:38:34.672581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.672619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.739 [2024-11-20 13:38:34.672632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.739 [2024-11-20 13:38:34.676888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.676934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:22.739 [2024-11-20 13:38:34.676947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.739 [2024-11-20 13:38:34.681174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.681222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.739 [2024-11-20 13:38:34.681236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.739 [2024-11-20 13:38:34.685699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.685741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.739 [2024-11-20 13:38:34.685755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.739 [2024-11-20 13:38:34.690000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.739 [2024-11-20 13:38:34.690040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.739 [2024-11-20 13:38:34.690054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.999 [2024-11-20 13:38:34.694288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.999 [2024-11-20 13:38:34.694325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.999 [2024-11-20 13:38:34.694339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.999 [2024-11-20 13:38:34.698592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.999 [2024-11-20 13:38:34.698632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.999 [2024-11-20 13:38:34.698645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.999 [2024-11-20 13:38:34.702960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.999 [2024-11-20 13:38:34.703000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.999 [2024-11-20 13:38:34.703013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.999 [2024-11-20 13:38:34.707326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:22.999 [2024-11-20 13:38:34.707365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.999 [2024-11-20 13:38:34.707378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.711701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.711739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.711752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.715980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.716017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.716030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.720317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.720362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.720374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.724619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.724657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.724670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.728925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.728969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.728983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.733208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.733244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.733257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.737472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.737509] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.737523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.741637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.741674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.741687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.745982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.746020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.746033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.750346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.750382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.750395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.754709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.754746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.754759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.759131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.759170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.759196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.763451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.763490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.763503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.767751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 
13:38:34.767788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.767802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.772166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.772229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.772245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.776692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.776732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.776746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.781033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.781071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.781085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.785340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.785377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.785391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.789657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.789694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.789708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.793905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.793941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.793955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.798165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.798213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.798227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.802484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.802523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.802537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.806865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.806903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.806917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.811165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.811212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.811226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.815472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.815510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.815523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.000 [2024-11-20 13:38:34.819779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.000 [2024-11-20 13:38:34.819817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.000 [2024-11-20 13:38:34.819830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.001 [2024-11-20 13:38:34.824104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.001 [2024-11-20 13:38:34.824141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.001 [2024-11-20 13:38:34.824155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.001 [2024-11-20 13:38:34.828356] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.001 [2024-11-20 13:38:34.828392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.001 [2024-11-20 13:38:34.828404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.001 [2024-11-20 13:38:34.832655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.001 [2024-11-20 13:38:34.832693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.001 [2024-11-20 13:38:34.832706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.001 [2024-11-20 13:38:34.836994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.001 [2024-11-20 13:38:34.837032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.001 [2024-11-20 13:38:34.837045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.001 [2024-11-20 13:38:34.841289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.001 [2024-11-20 13:38:34.841326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.001 [2024-11-20 13:38:34.841339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.001 [2024-11-20 13:38:34.845615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c1400) 00:19:23.001 [2024-11-20 13:38:34.845653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.001 [2024-11-20 13:38:34.845667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.001 7013.50 IOPS, 876.69 MiB/s 00:19:23.001 Latency(us) 00:19:23.001 [2024-11-20T13:38:34.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.001 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:23.001 nvme0n1 : 2.00 7011.46 876.43 0.00 0.00 2278.48 1995.87 6881.28 00:19:23.001 [2024-11-20T13:38:34.958Z] =================================================================================================================== 00:19:23.001 [2024-11-20T13:38:34.958Z] Total : 7011.46 876.43 0.00 0.00 2278.48 1995.87 6881.28 00:19:23.001 { 00:19:23.001 "results": [ 00:19:23.001 { 00:19:23.001 "job": "nvme0n1", 00:19:23.001 "core_mask": "0x2", 00:19:23.001 "workload": "randread", 00:19:23.001 "status": "finished", 00:19:23.001 "queue_depth": 16, 00:19:23.001 "io_size": 131072, 00:19:23.001 "runtime": 2.002865, 00:19:23.001 "iops": 7011.45608915229, 00:19:23.001 "mibps": 876.4320111440362, 00:19:23.001 "io_failed": 0, 00:19:23.001 "io_timeout": 0, 
00:19:23.001 "avg_latency_us": 2278.4809731150426, 00:19:23.001 "min_latency_us": 1995.8690909090908, 00:19:23.001 "max_latency_us": 6881.28 00:19:23.001 } 00:19:23.001 ], 00:19:23.001 "core_count": 1 00:19:23.001 } 00:19:23.001 13:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:23.001 13:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:23.001 13:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:23.001 | .driver_specific 00:19:23.001 | .nvme_error 00:19:23.001 | .status_code 00:19:23.001 | .command_transient_transport_error' 00:19:23.001 13:38:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:23.260 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 453 > 0 )) 00:19:23.260 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80866 00:19:23.260 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80866 ']' 00:19:23.260 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80866 00:19:23.260 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:23.260 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.260 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80866 00:19:23.260 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:23.260 killing process with pid 80866 00:19:23.260 Received shutdown signal, test time was about 2.000000 seconds 00:19:23.260 00:19:23.260 Latency(us) 00:19:23.260 [2024-11-20T13:38:35.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.260 [2024-11-20T13:38:35.217Z] =================================================================================================================== 00:19:23.260 [2024-11-20T13:38:35.217Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.260 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:23.260 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80866' 00:19:23.260 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80866 00:19:23.260 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80866 00:19:23.519 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:19:23.519 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:23.519 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:23.519 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:23.519 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:23.519 13:38:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80919 00:19:23.519 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80919 /var/tmp/bperf.sock 00:19:23.519 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80919 ']' 00:19:23.519 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:19:23.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:23.519 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:23.519 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.519 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:23.519 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.519 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:23.519 [2024-11-20 13:38:35.447321] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:19:23.519 [2024-11-20 13:38:35.447416] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80919 ] 00:19:23.779 [2024-11-20 13:38:35.590423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.779 [2024-11-20 13:38:35.651359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.779 [2024-11-20 13:38:35.705699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:24.037 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.037 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:24.037 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:24.037 13:38:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:24.296 13:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:24.296 13:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.296 13:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:24.296 13:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.296 13:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:24.296 13:38:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:24.554 nvme0n1 00:19:24.554 13:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:24.554 13:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.554 13:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:24.554 13:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.554 13:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:24.554 13:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:24.813 Running I/O for 2 seconds... 00:19:24.813 [2024-11-20 13:38:36.568212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef7100 00:19:24.813 [2024-11-20 13:38:36.569894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.813 [2024-11-20 13:38:36.569937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.813 [2024-11-20 13:38:36.586056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef7970 00:19:24.813 [2024-11-20 13:38:36.587693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.813 [2024-11-20 13:38:36.587728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.813 [2024-11-20 13:38:36.602579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef81e0 00:19:24.813 [2024-11-20 13:38:36.604156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.813 [2024-11-20 13:38:36.604198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:24.813 [2024-11-20 13:38:36.618950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef8a50 00:19:24.813 [2024-11-20 13:38:36.620533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.813 [2024-11-20 13:38:36.620564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:24.813 [2024-11-20 13:38:36.635342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef92c0 00:19:24.813 [2024-11-20 13:38:36.636893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.813 [2024-11-20 
13:38:36.636937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:24.813 [2024-11-20 13:38:36.651815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef9b30 00:19:24.813 [2024-11-20 13:38:36.653356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.813 [2024-11-20 13:38:36.653389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:24.813 [2024-11-20 13:38:36.668140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efa3a0 00:19:24.813 [2024-11-20 13:38:36.669678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.813 [2024-11-20 13:38:36.669711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:24.813 [2024-11-20 13:38:36.684795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efac10 00:19:24.813 [2024-11-20 13:38:36.686295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.813 [2024-11-20 13:38:36.686326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:24.813 [2024-11-20 13:38:36.701263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efb480 00:19:24.813 [2024-11-20 13:38:36.702722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.813 [2024-11-20 13:38:36.702755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:24.813 [2024-11-20 13:38:36.717633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efbcf0 00:19:24.813 [2024-11-20 13:38:36.719065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.813 [2024-11-20 13:38:36.719096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:24.813 [2024-11-20 13:38:36.733980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efc560 00:19:24.813 [2024-11-20 13:38:36.735402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.813 [2024-11-20 13:38:36.735432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:24.813 [2024-11-20 13:38:36.750356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efcdd0 00:19:24.813 [2024-11-20 13:38:36.751758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.813 
[2024-11-20 13:38:36.751789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:24.813 [2024-11-20 13:38:36.766742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efd640 00:19:24.813 [2024-11-20 13:38:36.768106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.813 [2024-11-20 13:38:36.768136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:25.072 [2024-11-20 13:38:36.783196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efdeb0 00:19:25.072 [2024-11-20 13:38:36.784547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-11-20 13:38:36.784579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:25.072 [2024-11-20 13:38:36.799584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efe720 00:19:25.072 [2024-11-20 13:38:36.800905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-11-20 13:38:36.800942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:25.072 [2024-11-20 13:38:36.815964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eff3c8 00:19:25.072 [2024-11-20 13:38:36.817296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-11-20 13:38:36.817327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:25.072 [2024-11-20 13:38:36.840625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eff3c8 00:19:25.072 [2024-11-20 13:38:36.843251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-11-20 13:38:36.843288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:25.072 [2024-11-20 13:38:36.857138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efe720 00:19:25.072 [2024-11-20 13:38:36.859693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-11-20 13:38:36.859729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:25.072 [2024-11-20 13:38:36.873549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efdeb0 00:19:25.072 [2024-11-20 13:38:36.876071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 
[2024-11-20 13:38:36.876104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:25.072 [2024-11-20 13:38:36.889966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efd640 00:19:25.072 [2024-11-20 13:38:36.892496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-11-20 13:38:36.892528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:25.072 [2024-11-20 13:38:36.906366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efcdd0 00:19:25.072 [2024-11-20 13:38:36.908846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-11-20 13:38:36.908878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:25.072 [2024-11-20 13:38:36.922848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efc560 00:19:25.072 [2024-11-20 13:38:36.925356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-11-20 13:38:36.925387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:25.072 [2024-11-20 13:38:36.939327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efbcf0 00:19:25.072 [2024-11-20 13:38:36.941792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-11-20 13:38:36.941827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:25.072 [2024-11-20 13:38:36.955768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efb480 00:19:25.072 [2024-11-20 13:38:36.958230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-11-20 13:38:36.958262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:25.072 [2024-11-20 13:38:36.972165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efac10 00:19:25.072 [2024-11-20 13:38:36.974627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-11-20 13:38:36.974660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:25.073 [2024-11-20 13:38:36.988632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016efa3a0 00:19:25.073 [2024-11-20 13:38:36.991044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:25.073 [2024-11-20 13:38:36.991077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:25.073 [2024-11-20 13:38:37.005107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef9b30 00:19:25.073 [2024-11-20 13:38:37.007493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-11-20 13:38:37.007529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:25.073 [2024-11-20 13:38:37.021562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef92c0 00:19:25.073 [2024-11-20 13:38:37.023917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-11-20 13:38:37.023949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.037933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef8a50 00:19:25.332 [2024-11-20 13:38:37.040265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.040296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.054367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef81e0 00:19:25.332 [2024-11-20 13:38:37.056680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.056713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.070707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef7970 00:19:25.332 [2024-11-20 13:38:37.073006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.073037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.087176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef7100 00:19:25.332 [2024-11-20 13:38:37.089479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.089510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.103584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef6890 00:19:25.332 [2024-11-20 13:38:37.105858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17776 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.105889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.119986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef6020 00:19:25.332 [2024-11-20 13:38:37.122244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.122275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.136362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef57b0 00:19:25.332 [2024-11-20 13:38:37.138588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.138618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.152768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef4f40 00:19:25.332 [2024-11-20 13:38:37.154985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.155017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.169159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef46d0 00:19:25.332 [2024-11-20 13:38:37.171340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.171372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.185627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef3e60 00:19:25.332 [2024-11-20 13:38:37.187780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.187811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.202007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef35f0 00:19:25.332 [2024-11-20 13:38:37.204121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.204151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.218739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef2d80 00:19:25.332 [2024-11-20 13:38:37.220858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17304 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.220889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.235870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef2510 00:19:25.332 [2024-11-20 13:38:37.238012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.238045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.252370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef1ca0 00:19:25.332 [2024-11-20 13:38:37.254484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.254526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.268805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef1430 00:19:25.332 [2024-11-20 13:38:37.270881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.270911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.332 [2024-11-20 13:38:37.285256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef0bc0 00:19:25.332 [2024-11-20 13:38:37.287296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.332 [2024-11-20 13:38:37.287328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:25.591 [2024-11-20 13:38:37.301675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef0350 00:19:25.591 [2024-11-20 13:38:37.303676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.591 [2024-11-20 13:38:37.303707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:25.591 [2024-11-20 13:38:37.318105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eefae0 00:19:25.591 [2024-11-20 13:38:37.320105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.591 [2024-11-20 13:38:37.320137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.591 [2024-11-20 13:38:37.334502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eef270 00:19:25.591 [2024-11-20 13:38:37.336478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:2572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.591 [2024-11-20 13:38:37.336510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.591 [2024-11-20 13:38:37.351011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eeea00 00:19:25.591 [2024-11-20 13:38:37.352987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.591 [2024-11-20 13:38:37.353020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.591 [2024-11-20 13:38:37.368296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eee190 00:19:25.591 [2024-11-20 13:38:37.370265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.591 [2024-11-20 13:38:37.370298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.591 [2024-11-20 13:38:37.384885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eed920 00:19:25.591 [2024-11-20 13:38:37.386813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.591 [2024-11-20 13:38:37.386843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:25.591 [2024-11-20 13:38:37.401516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eed0b0 00:19:25.591 [2024-11-20 13:38:37.403506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.591 [2024-11-20 13:38:37.403541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:25.591 [2024-11-20 13:38:37.418103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eec840 00:19:25.591 [2024-11-20 13:38:37.419987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.591 [2024-11-20 13:38:37.420021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:25.591 [2024-11-20 13:38:37.434650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eebfd0 00:19:25.591 [2024-11-20 13:38:37.436491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.591 [2024-11-20 13:38:37.436525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:25.591 [2024-11-20 13:38:37.451245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eeb760 00:19:25.591 [2024-11-20 13:38:37.453081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:92 nsid:1 lba:17310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.591 [2024-11-20 13:38:37.453113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:25.591 [2024-11-20 13:38:37.467755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eeaef0 00:19:25.591 [2024-11-20 13:38:37.469688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.592 [2024-11-20 13:38:37.469721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:25.592 [2024-11-20 13:38:37.484995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eea680 00:19:25.592 [2024-11-20 13:38:37.486826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.592 [2024-11-20 13:38:37.486866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:25.592 [2024-11-20 13:38:37.501608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee9e10 00:19:25.592 [2024-11-20 13:38:37.503386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.592 [2024-11-20 13:38:37.503421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:25.592 [2024-11-20 13:38:37.518238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee95a0 00:19:25.592 [2024-11-20 13:38:37.520002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.592 [2024-11-20 13:38:37.520036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:25.592 [2024-11-20 13:38:37.534701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee8d30 00:19:25.592 [2024-11-20 13:38:37.536429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.592 [2024-11-20 13:38:37.536462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.551246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee84c0 00:19:25.851 [2024-11-20 13:38:37.553099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.553131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:25.851 15182.00 IOPS, 59.30 MiB/s [2024-11-20T13:38:37.808Z] [2024-11-20 13:38:37.567853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee7c50 00:19:25.851 [2024-11-20 
13:38:37.569588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.569619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.584374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee73e0 00:19:25.851 [2024-11-20 13:38:37.586065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.586097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.601538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee6b70 00:19:25.851 [2024-11-20 13:38:37.603299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.603330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.618135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee6300 00:19:25.851 [2024-11-20 13:38:37.619794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.619826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.634613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee5a90 00:19:25.851 [2024-11-20 13:38:37.636231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.636263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.651091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee5220 00:19:25.851 [2024-11-20 13:38:37.652701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.652732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.667545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee49b0 00:19:25.851 [2024-11-20 13:38:37.669118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.669149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.684071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with 
pdu=0x200016ee4140 00:19:25.851 [2024-11-20 13:38:37.685678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.685710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.700532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee38d0 00:19:25.851 [2024-11-20 13:38:37.702080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.702112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.717026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee3060 00:19:25.851 [2024-11-20 13:38:37.718609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.718639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.733514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee27f0 00:19:25.851 [2024-11-20 13:38:37.734992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.735023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.749885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee1f80 00:19:25.851 [2024-11-20 13:38:37.751361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.751390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.766249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee1710 00:19:25.851 [2024-11-20 13:38:37.767696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.851 [2024-11-20 13:38:37.767727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:25.851 [2024-11-20 13:38:37.782631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee0ea0 00:19:25.852 [2024-11-20 13:38:37.784054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.852 [2024-11-20 13:38:37.784084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:25.852 [2024-11-20 13:38:37.799004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdc0ae0) with pdu=0x200016ee0630 00:19:25.852 [2024-11-20 13:38:37.800414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.852 [2024-11-20 13:38:37.800444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:37.815478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016edfdc0 00:19:26.111 [2024-11-20 13:38:37.816858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:37.816890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:37.831878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016edf550 00:19:26.111 [2024-11-20 13:38:37.833282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:37.833312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:37.848323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016edece0 00:19:26.111 [2024-11-20 13:38:37.849687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:37.849718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:37.864716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ede470 00:19:26.111 [2024-11-20 13:38:37.866043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:37.866075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:37.888018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eddc00 00:19:26.111 [2024-11-20 13:38:37.890625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:37.890657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:37.904530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ede470 00:19:26.111 [2024-11-20 13:38:37.907107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:37.907140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:37.920938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xdc0ae0) with pdu=0x200016edece0 00:19:26.111 [2024-11-20 13:38:37.923494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:37.923525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:37.937332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016edf550 00:19:26.111 [2024-11-20 13:38:37.939851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:37.939884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:37.953794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016edfdc0 00:19:26.111 [2024-11-20 13:38:37.956331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:37.956361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:37.970260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee0630 00:19:26.111 [2024-11-20 13:38:37.972731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:37.972762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:37.986704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee0ea0 00:19:26.111 [2024-11-20 13:38:37.989201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:37.989232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:38.003216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee1710 00:19:26.111 [2024-11-20 13:38:38.005697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:38.005730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:38.019672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee1f80 00:19:26.111 [2024-11-20 13:38:38.022115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:38.022146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:38.036145] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee27f0 00:19:26.111 [2024-11-20 13:38:38.038608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:38.038654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:26.111 [2024-11-20 13:38:38.052919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee3060 00:19:26.111 [2024-11-20 13:38:38.055314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.111 [2024-11-20 13:38:38.055353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.069487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee38d0 00:19:26.370 [2024-11-20 13:38:38.071859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.071893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.085989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee4140 00:19:26.370 [2024-11-20 13:38:38.088358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.088390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.102454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee49b0 00:19:26.370 [2024-11-20 13:38:38.104778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.104810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.118847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee5220 00:19:26.370 [2024-11-20 13:38:38.121196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.121227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.135325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee5a90 00:19:26.370 [2024-11-20 13:38:38.137628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.137660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.151702] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee6300 00:19:26.370 [2024-11-20 13:38:38.153982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.154014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.168170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee6b70 00:19:26.370 [2024-11-20 13:38:38.170433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.170466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.184569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee73e0 00:19:26.370 [2024-11-20 13:38:38.186822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.186855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.201068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee7c50 00:19:26.370 [2024-11-20 13:38:38.203289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.203319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.217559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee84c0 00:19:26.370 [2024-11-20 13:38:38.219751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.219784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.233990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee8d30 00:19:26.370 [2024-11-20 13:38:38.236152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.236182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.250474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee95a0 00:19:26.370 [2024-11-20 13:38:38.252630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.252662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 
13:38:38.266970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ee9e10 00:19:26.370 [2024-11-20 13:38:38.269114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.269145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.283487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eea680 00:19:26.370 [2024-11-20 13:38:38.285611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.285643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.299943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eeaef0 00:19:26.370 [2024-11-20 13:38:38.302042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.302073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:26.370 [2024-11-20 13:38:38.316386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eeb760 00:19:26.370 [2024-11-20 13:38:38.318482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.370 [2024-11-20 13:38:38.318516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:26.630 [2024-11-20 13:38:38.332808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eebfd0 00:19:26.630 [2024-11-20 13:38:38.334871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.334902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:26.630 [2024-11-20 13:38:38.349294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eec840 00:19:26.630 [2024-11-20 13:38:38.351316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.351348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:26.630 [2024-11-20 13:38:38.365687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eed0b0 00:19:26.630 [2024-11-20 13:38:38.367711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.367743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:26.630 
[2024-11-20 13:38:38.382131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eed920 00:19:26.630 [2024-11-20 13:38:38.384122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.384155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:26.630 [2024-11-20 13:38:38.398611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eee190 00:19:26.630 [2024-11-20 13:38:38.400603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.400634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:26.630 [2024-11-20 13:38:38.415014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eeea00 00:19:26.630 [2024-11-20 13:38:38.416970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.417000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.630 [2024-11-20 13:38:38.431442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eef270 00:19:26.630 [2024-11-20 13:38:38.433383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.433414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:26.630 [2024-11-20 13:38:38.447786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016eefae0 00:19:26.630 [2024-11-20 13:38:38.449713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.449742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:26.630 [2024-11-20 13:38:38.464149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef0350 00:19:26.630 [2024-11-20 13:38:38.466024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.466055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:26.630 [2024-11-20 13:38:38.480545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef0bc0 00:19:26.630 [2024-11-20 13:38:38.482550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.482582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
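Each injected failure in this run shows up as a pair of log lines: tcp.c reports a data digest (CRC32C) mismatch on the qpair, and the WRITE it belongs to is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). When sanity-checking a capture like this offline, two fixed-string grep counts are enough to confirm the pairs line up; run.log is only an illustrative file name, not something this job writes:

grep -Fc 'Data digest error on tqpair=' run.log
grep -Fc 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' run.log
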
00:19:26.630 [2024-11-20 13:38:38.497382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef1430 00:19:26.630 [2024-11-20 13:38:38.499261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.499291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:26.630 [2024-11-20 13:38:38.513977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef1ca0 00:19:26.630 [2024-11-20 13:38:38.515832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.515864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:26.630 [2024-11-20 13:38:38.530550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef2510 00:19:26.630 [2024-11-20 13:38:38.532441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.532473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:26.630 [2024-11-20 13:38:38.547095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0ae0) with pdu=0x200016ef2d80 00:19:26.630 [2024-11-20 13:38:38.548887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.630 [2024-11-20 13:38:38.548924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:26.630 15307.50 IOPS, 59.79 MiB/s 00:19:26.630 Latency(us) 00:19:26.630 [2024-11-20T13:38:38.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.630 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:26.630 nvme0n1 : 2.01 15316.51 59.83 0.00 0.00 8339.66 4557.73 31457.28 00:19:26.630 [2024-11-20T13:38:38.587Z] =================================================================================================================== 00:19:26.630 [2024-11-20T13:38:38.587Z] Total : 15316.51 59.83 0.00 0.00 8339.66 4557.73 31457.28 00:19:26.630 { 00:19:26.630 "results": [ 00:19:26.630 { 00:19:26.630 "job": "nvme0n1", 00:19:26.630 "core_mask": "0x2", 00:19:26.630 "workload": "randwrite", 00:19:26.630 "status": "finished", 00:19:26.630 "queue_depth": 128, 00:19:26.630 "io_size": 4096, 00:19:26.630 "runtime": 2.008552, 00:19:26.630 "iops": 15316.506617702704, 00:19:26.630 "mibps": 59.83010397540119, 00:19:26.630 "io_failed": 0, 00:19:26.630 "io_timeout": 0, 00:19:26.630 "avg_latency_us": 8339.66389617144, 00:19:26.630 "min_latency_us": 4557.730909090909, 00:19:26.630 "max_latency_us": 31457.28 00:19:26.630 } 00:19:26.630 ], 00:19:26.630 "core_count": 1 00:19:26.630 } 00:19:26.630 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:26.630 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 
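The brace-delimited JSON above is bdevperf's summary of the 4 KiB, queue-depth-128 randwrite pass that produced the digest errors, and the trace lines around this point show get_transient_errcount reading the per-bdev NVMe error counters back over the bperf RPC socket; the jq filter it pipes through appears in the next trace lines. A minimal sketch of that check, assuming only the rpc.py path and socket shown in the trace (the errs variable name is illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Read the accumulated NVMe error counters for nvme0n1 from the bperf app
# and keep only the transient transport error count.
errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The digest-error run only counts as a pass if errors were actually seen;
# the trace below evaluates this as "(( 120 > 0 ))".
(( errs > 0 ))
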
00:19:26.630 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:26.630 | .driver_specific 00:19:26.630 | .nvme_error 00:19:26.630 | .status_code 00:19:26.630 | .command_transient_transport_error' 00:19:26.630 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:27.200 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 120 > 0 )) 00:19:27.200 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80919 00:19:27.200 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80919 ']' 00:19:27.200 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80919 00:19:27.200 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:27.200 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.200 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80919 00:19:27.200 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:27.200 killing process with pid 80919 00:19:27.200 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:27.200 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80919' 00:19:27.200 Received shutdown signal, test time was about 2.000000 seconds 00:19:27.200 00:19:27.200 Latency(us) 00:19:27.200 [2024-11-20T13:38:39.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.200 [2024-11-20T13:38:39.157Z] =================================================================================================================== 00:19:27.200 [2024-11-20T13:38:39.157Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.201 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80919 00:19:27.201 13:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80919 00:19:27.458 13:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:19:27.458 13:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:27.458 13:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:27.458 13:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:27.458 13:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:27.458 13:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80972 00:19:27.458 13:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:19:27.458 13:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80972 /var/tmp/bperf.sock 00:19:27.458 13:38:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80972 ']' 00:19:27.458 13:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:27.458 13:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:27.458 13:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:27.458 13:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.458 13:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:27.458 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:27.458 Zero copy mechanism will not be used. 00:19:27.458 [2024-11-20 13:38:39.221158] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:19:27.458 [2024-11-20 13:38:39.221268] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80972 ] 00:19:27.458 [2024-11-20 13:38:39.362808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.716 [2024-11-20 13:38:39.422597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.716 [2024-11-20 13:38:39.476426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:28.283 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.283 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:28.283 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:28.283 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:28.850 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:28.850 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.850 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:28.850 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.850 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:28.850 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:29.107 nvme0n1 00:19:29.107 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:29.107 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.107 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:29.107 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.107 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:29.107 13:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:29.367 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:29.367 Zero copy mechanism will not be used. 00:19:29.367 Running I/O for 2 seconds... 00:19:29.367 [2024-11-20 13:38:41.074914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.075052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.075082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.081631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.081716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.081742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.087925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.088026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.088050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.094226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.094308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.094331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.100435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.100522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.100546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.106969] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.107172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.107204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.113277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.113363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.113389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.119573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.119675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.119701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.125901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.126033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.126057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.132956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.133068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.133091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.140422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.140546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.140568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.146365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.146457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.146480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.367 
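For reference, the randwrite digest-error phase that produces the records above is driven by bdevperf in RPC-controlled mode; a minimal sketch of the launch step, reusing only the command line, socket path and helper names already shown in this trace (waitforlisten comes from autotest_common.sh; bperfpid is shorthand introduced here):

  # Start bdevperf idle (-z): 128 KiB random writes, queue depth 16, 2 s runtime.
  # 131072 bytes is above the 65536-byte zero-copy threshold, which is why the
  # "Zero copy mechanism will not be used" notice appears in this trace.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  waitforlisten "$bperfpid" /var/tmp/bperf.sock   # block until the RPC socket accepts requests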
[2024-11-20 13:38:41.151652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.151748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.151770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.156975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.157060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.157091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.162166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.162299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.162322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.167513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.167602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.167625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.172796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.172877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.172900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.178031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.178102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.178125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.182982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.183170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.183210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.188425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.188721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.188752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.193847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.194158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.194202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.199098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.199424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.367 [2024-11-20 13:38:41.199453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.367 [2024-11-20 13:38:41.204346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.367 [2024-11-20 13:38:41.204661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.204691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.209780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.210080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.210109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.215005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.215316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.215345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.220258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.220550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.220578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.225498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.225792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.225820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.230715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.231007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.231034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.235957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.236272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.236300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.241306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.241601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.241628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.246685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.246977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.247005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.252001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.252307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.252338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.257418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.257720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.257747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.262725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.263023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.263052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.268095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.268402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.268429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.273824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.274133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.274161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.279163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.279472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.279499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.284566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.284864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.284892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.289919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.290228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.290255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.295342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.295637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.295665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.300713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.301021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.301049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.306055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.306362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.306391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.311363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.311657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.311685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.368 [2024-11-20 13:38:41.316725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.368 [2024-11-20 13:38:41.317030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.368 [2024-11-20 13:38:41.317058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.628 [2024-11-20 13:38:41.322020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.628 [2024-11-20 13:38:41.322334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.628 [2024-11-20 13:38:41.322361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.628 [2024-11-20 13:38:41.327416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.628 [2024-11-20 13:38:41.327720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.628 [2024-11-20 13:38:41.327748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.628 [2024-11-20 13:38:41.332680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.628 [2024-11-20 13:38:41.332987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.628 [2024-11-20 13:38:41.333015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.628 [2024-11-20 13:38:41.338042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.628 [2024-11-20 13:38:41.338373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.628 [2024-11-20 13:38:41.338401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.628 [2024-11-20 13:38:41.343328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.628 [2024-11-20 13:38:41.343624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.628 [2024-11-20 13:38:41.343651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.628 [2024-11-20 13:38:41.348680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.628 [2024-11-20 13:38:41.348985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.628 [2024-11-20 13:38:41.349008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.628 [2024-11-20 13:38:41.354085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.628 [2024-11-20 13:38:41.354393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.628 [2024-11-20 13:38:41.354416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.628 [2024-11-20 13:38:41.359411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.628 [2024-11-20 13:38:41.359713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.359741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.364752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.365056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.365084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.370115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.370427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 
13:38:41.370454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.375528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.375823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.375851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.380852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.381166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.381204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.386302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.386596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.386624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.391644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.391932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.391959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.397021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.397330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.397366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.402379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.402682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.402712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.407731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.408024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
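The data-digest failures being logged here are injected on purpose; a condensed sketch of the setup sequence from host/digest.sh (the @61..@69 steps traced near the start of this section), with every flag copied from this log. rpc_cmd targets the NVMe-oF target application, whose RPC socket is not shown here; the remaining calls go to the bperf socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Count NVMe error completions per status code and retry indefinitely in the bdev layer.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Keep crc32c error injection disabled while connecting, then attach the remote
  # namespace with TCP data digest enabled (--ddgst); this creates bdev nvme0n1.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Switch injection to 'corrupt' (flags as used by digest.sh) and drive I/O: digest
  # verification now fails and each WRITE completes with COMMAND TRANSIENT TRANSPORT
  # ERROR, exactly as printed in the surrounding records.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests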
00:19:29.629 [2024-11-20 13:38:41.408052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.413121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.413440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.413468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.418542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.418839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.418867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.423925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.424236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.424263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.429366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.429661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.429688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.434560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.434856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.434884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.439865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.440158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.440196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.445178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.445489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.445517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.450505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.450802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.450830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.455806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.456100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.456128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.461162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.461477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.461504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.466509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.466807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.466834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.471863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.472155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.472194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.477223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.477522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.477549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.482569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.482882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.482909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.487996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.488303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.488330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.493379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.493675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.493702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.498727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.499023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.499051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.629 [2024-11-20 13:38:41.504025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.629 [2024-11-20 13:38:41.504329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.629 [2024-11-20 13:38:41.504357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.509470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.509770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.509798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.514772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.515086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.515114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.520139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.520450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.520479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.525492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.525787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.525815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.530768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.531066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.531095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.536146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.536458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.536485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.541446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.541750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.541776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.546865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.547158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.547197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.552252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.552546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.552573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.557652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.557955] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.557983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.562977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.563291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.563319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.568323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.568620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.568648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.573681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.573974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.574001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.630 [2024-11-20 13:38:41.579053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.630 [2024-11-20 13:38:41.579359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.630 [2024-11-20 13:38:41.579382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.584360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 13:38:41.584652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.584680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.589765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 13:38:41.590060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.590087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.595020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 13:38:41.595331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.595359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.600362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 13:38:41.600658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.600686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.605682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 13:38:41.605979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.606006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.611031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 13:38:41.611341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.611369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.616367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 13:38:41.616664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.616692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.621679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 13:38:41.621972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.621999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.627035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 13:38:41.627342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.627371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.632342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 
13:38:41.632638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.632665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.637714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 13:38:41.638010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.638038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.643023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 13:38:41.643333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.643361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.648311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 13:38:41.648605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.648632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.890 [2024-11-20 13:38:41.653649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.890 [2024-11-20 13:38:41.653945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.890 [2024-11-20 13:38:41.653972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.659015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.659322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.659349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.664404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.664709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.664738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.669767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 
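After the 2-second run completes, the script reads the accumulated error counters back from bdevperf and asserts that digest errors were actually observed, mirroring the digest.sh@18/@28/@71 lines at the top of this section (the previous phase counted 120 of them); a minimal sketch, with bperfpid as introduced above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  errs=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))            # digest.sh@71 performs exactly this check on the counter
  killprocess "$bperfpid"   # autotest_common.sh helper: kill the bdevperf process and wait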
00:19:29.891 [2024-11-20 13:38:41.670072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.670100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.675307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.675606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.675633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.680805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.681109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.681136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.686121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.686429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.686457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.691544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.691839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.691866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.696837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.697141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.697169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.702170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.702484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.702511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.707488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) 
with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.707805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.707832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.712836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.713142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.713169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.718200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.718497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.718524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.723579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.723879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.723907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.728984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.729300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.729327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.734280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.734575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.734602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.739630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.739925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.739953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.744888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.745205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.745232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.750213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.750507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.750534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.755490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.755785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.755813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.760891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.761210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.761237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.766226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.766522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.766549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.771556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.771861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.771889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.776871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.777175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.777215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.782258] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.782560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.782587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.787596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.787891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.787919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.792940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.793250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.793286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.798284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.798580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.798611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.803606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.803904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.803932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.808900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.809220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.809248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.814268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.814563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.814591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.819582] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.819887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.819915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.824901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.825225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.825253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.830482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.830801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.830829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.835776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.836081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.836109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.891 [2024-11-20 13:38:41.841174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:29.891 [2024-11-20 13:38:41.841487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.891 [2024-11-20 13:38:41.841515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.152 [2024-11-20 13:38:41.846512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.152 [2024-11-20 13:38:41.846806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.152 [2024-11-20 13:38:41.846835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.152 [2024-11-20 13:38:41.851834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.152 [2024-11-20 13:38:41.852127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.152 [2024-11-20 13:38:41.852155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.152 
[2024-11-20 13:38:41.857067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.152 [2024-11-20 13:38:41.857374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.152 [2024-11-20 13:38:41.857402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.152 [2024-11-20 13:38:41.862536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.152 [2024-11-20 13:38:41.862838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.152 [2024-11-20 13:38:41.862866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.152 [2024-11-20 13:38:41.867849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.152 [2024-11-20 13:38:41.868143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.152 [2024-11-20 13:38:41.868171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.152 [2024-11-20 13:38:41.873298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.152 [2024-11-20 13:38:41.873593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.152 [2024-11-20 13:38:41.873620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.152 [2024-11-20 13:38:41.878662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.152 [2024-11-20 13:38:41.878960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.152 [2024-11-20 13:38:41.878987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.152 [2024-11-20 13:38:41.884107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.152 [2024-11-20 13:38:41.884415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.152 [2024-11-20 13:38:41.884443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.152 [2024-11-20 13:38:41.889424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.152 [2024-11-20 13:38:41.889728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.152 [2024-11-20 13:38:41.889756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:19:30.152 [2024-11-20 13:38:41.894842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.152 [2024-11-20 13:38:41.895145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.152 [2024-11-20 13:38:41.895173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.152 [2024-11-20 13:38:41.900121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.152 [2024-11-20 13:38:41.900430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.152 [2024-11-20 13:38:41.900457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.152 [2024-11-20 13:38:41.905556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.152 [2024-11-20 13:38:41.905857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.152 [2024-11-20 13:38:41.905884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.152 [2024-11-20 13:38:41.911040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.152 [2024-11-20 13:38:41.911352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.911379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.916344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.916642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.916670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.921863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.922162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.922199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.927159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.927471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.927499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.932592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.932890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.932927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.937973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.938311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.938340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.943387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.943686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.943715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.948729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.949034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.949062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.954102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.954412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.954440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.959535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.959847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.959877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.964903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.965241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.965270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.970332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.970646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.970674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.975709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.976021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.976050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.981129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.981446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.981474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.986553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.986852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.986881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.991943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.992257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.992287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:41.997275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:41.997579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:41.997607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:42.002711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:42.003008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:42.003037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:42.008023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:42.008336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:42.008365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:42.013284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:42.013582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:42.013611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:42.018612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:42.018910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:42.018938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:42.023966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:42.024279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:42.024308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:42.029373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:42.029675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:42.029703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:42.034753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:42.035051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:42.035080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:42.040170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:42.040482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:42.040512] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:42.045610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:42.045913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:42.045943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:42.050907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:42.051214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.153 [2024-11-20 13:38:42.051243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.153 [2024-11-20 13:38:42.056314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.153 [2024-11-20 13:38:42.056609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.154 [2024-11-20 13:38:42.056637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.154 [2024-11-20 13:38:42.061638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.154 [2024-11-20 13:38:42.061941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.154 [2024-11-20 13:38:42.061969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.154 [2024-11-20 13:38:42.066938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.154 [2024-11-20 13:38:42.067247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.154 [2024-11-20 13:38:42.067277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.154 [2024-11-20 13:38:42.072330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.154 [2024-11-20 13:38:42.072628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.154 [2024-11-20 13:38:42.072657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.154 5707.00 IOPS, 713.38 MiB/s [2024-11-20T13:38:42.111Z] [2024-11-20 13:38:42.078577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.154 [2024-11-20 13:38:42.078893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.154 
[2024-11-20 13:38:42.078918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.154 [2024-11-20 13:38:42.084030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.154 [2024-11-20 13:38:42.084342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.154 [2024-11-20 13:38:42.084372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.154 [2024-11-20 13:38:42.089426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.154 [2024-11-20 13:38:42.089724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.154 [2024-11-20 13:38:42.089754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.154 [2024-11-20 13:38:42.094728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.154 [2024-11-20 13:38:42.095031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.154 [2024-11-20 13:38:42.095060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.154 [2024-11-20 13:38:42.100090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.154 [2024-11-20 13:38:42.100403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.154 [2024-11-20 13:38:42.100434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.154 [2024-11-20 13:38:42.105446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.154 [2024-11-20 13:38:42.105755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.154 [2024-11-20 13:38:42.105785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.414 [2024-11-20 13:38:42.110818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.414 [2024-11-20 13:38:42.111145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.414 [2024-11-20 13:38:42.111173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.414 [2024-11-20 13:38:42.116218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.414 [2024-11-20 13:38:42.116517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:30.414 [2024-11-20 13:38:42.116546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.414 [2024-11-20 13:38:42.121608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.414 [2024-11-20 13:38:42.121907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.121936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.126949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.127247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.127276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.132254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.132619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.132660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.137438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.137516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.137543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.142803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.142880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.142906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.148131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.148242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.148268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.153628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.153705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.153730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.158949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.159040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.159065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.164420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.164506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.164531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.169779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.169856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.169882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.175115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.175206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.175245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.180445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.180539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.180563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.185917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.186018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.186043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.191358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.191455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.191480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.196710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.196802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.196826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.201983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.202075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.202098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.207328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.207416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.207440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.212635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.212707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.212731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.218015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.218092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.218116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.223281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.223370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.223394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.228657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.228741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.228764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.234081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.234166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.234190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.239394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.239490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.239513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.244830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.244931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.244955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.250144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.250247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.250273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.415 [2024-11-20 13:38:42.255443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.415 [2024-11-20 13:38:42.255527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.415 [2024-11-20 13:38:42.255553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.260808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.260883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.260917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.266096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.266172] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.266210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.271551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.271638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.271662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.277012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.277086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.277110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.282395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.282466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.282489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.287676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.287749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.287772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.292976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.293047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.293070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.298325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.298397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.298420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.303613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.303685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.303708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.308931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.309006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.309029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.314244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.314321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.314344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.319549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.319638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.319661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.324918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.325007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.325030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.330326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.330397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.330420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.335719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.335805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.335829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.341064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 
13:38:42.341139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.341163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.346507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.346578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.346601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.352228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.352334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.352358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.357869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.357968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.357992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.363463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.363555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.363579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.416 [2024-11-20 13:38:42.368961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.416 [2024-11-20 13:38:42.369038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.416 [2024-11-20 13:38:42.369061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.374372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.678 [2024-11-20 13:38:42.374435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.374459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.379756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 
00:19:30.678 [2024-11-20 13:38:42.379831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.379855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.385198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.678 [2024-11-20 13:38:42.385283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.385307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.390640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.678 [2024-11-20 13:38:42.390735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.390763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.395951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.678 [2024-11-20 13:38:42.396030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.396057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.401301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.678 [2024-11-20 13:38:42.401374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.401400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.406706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.678 [2024-11-20 13:38:42.406781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.406806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.412039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.678 [2024-11-20 13:38:42.412128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.412153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.417441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with 
pdu=0x200016eff3c8 00:19:30.678 [2024-11-20 13:38:42.417552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.417577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.422743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.678 [2024-11-20 13:38:42.422818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.422843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.428046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.678 [2024-11-20 13:38:42.428123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.428149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.433437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.678 [2024-11-20 13:38:42.433552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.433576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.438808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.678 [2024-11-20 13:38:42.438879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.438903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.444184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.678 [2024-11-20 13:38:42.444287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.678 [2024-11-20 13:38:42.444311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.678 [2024-11-20 13:38:42.449595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.449682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.449706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.455024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.455112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.455137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.460481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.460563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.460587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.465889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.465980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.466003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.471377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.471473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.471497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.476692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.476763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.476787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.482097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.482173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.482211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.487525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.487615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.487637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.492936] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.493007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.493030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.498441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.498537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.498560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.503828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.503915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.503938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.509175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.509265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.509288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.514571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.514641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.514665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.519868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.519940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.519963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.525151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.525241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.525264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.530518] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.530590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.530612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.535809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.535884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.535907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.541110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.541217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.541241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.546480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.546551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.546573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.551771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.551841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.551864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.557026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.557096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.557119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.562285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.562362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.562386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.679 
[2024-11-20 13:38:42.567580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.567666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.567688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.572977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.573048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.573071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.578262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.578334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.578357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.583599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.583669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.679 [2024-11-20 13:38:42.583692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.679 [2024-11-20 13:38:42.588923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.679 [2024-11-20 13:38:42.589003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.680 [2024-11-20 13:38:42.589026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.680 [2024-11-20 13:38:42.594239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.680 [2024-11-20 13:38:42.594309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.680 [2024-11-20 13:38:42.594333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.680 [2024-11-20 13:38:42.599563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.680 [2024-11-20 13:38:42.599636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.680 [2024-11-20 13:38:42.599660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:19:30.680 [2024-11-20 13:38:42.605044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.680 [2024-11-20 13:38:42.605114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.680 [2024-11-20 13:38:42.605136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.680 [2024-11-20 13:38:42.610541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.680 [2024-11-20 13:38:42.610611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.680 [2024-11-20 13:38:42.610635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.680 [2024-11-20 13:38:42.615932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.680 [2024-11-20 13:38:42.616002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.680 [2024-11-20 13:38:42.616026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.680 [2024-11-20 13:38:42.621273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.680 [2024-11-20 13:38:42.621378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.680 [2024-11-20 13:38:42.621402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.680 [2024-11-20 13:38:42.626887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.680 [2024-11-20 13:38:42.626961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.680 [2024-11-20 13:38:42.626984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.632405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.632494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.632518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.637750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.637819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.637843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.643195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.643280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.643303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.648554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.648628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.648651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.653983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.654053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.654076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.659440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.659529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.659552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.664811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.664885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.664918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.670192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.670279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.670303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.675634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.675704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.675727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.680959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.681029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.681051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.686273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.686343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.686367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.691594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.691668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.691692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.696878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.696958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.696981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.702552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.702635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.702657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.940 [2024-11-20 13:38:42.708327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.940 [2024-11-20 13:38:42.708417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.940 [2024-11-20 13:38:42.708440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.714156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.714258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.714296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.719764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.719851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.719888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.725632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.725711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.725734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.731330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.731396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.731418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.736717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.736801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.736823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.743178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.743295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.743333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.750292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.750381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.750406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.755873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.755960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.755984] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.761276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.761348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.761371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.766722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.766797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.766823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.772072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.772158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.772181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.777351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.777440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.777478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.782549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.782618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.782641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.787755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.787837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.787859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.792960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.793029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.793053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.798162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.798259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.798283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.803504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.803574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.803598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.808639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.808725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.808748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.813870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.813953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.813976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.819073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.819159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.819182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.824316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.824403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.824426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.829550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.829634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 
13:38:42.829658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.834786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.834890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.834913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.840023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.840110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.840133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.845282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.845368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.845391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.850477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.941 [2024-11-20 13:38:42.850576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.941 [2024-11-20 13:38:42.850599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.941 [2024-11-20 13:38:42.855671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.942 [2024-11-20 13:38:42.855768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.942 [2024-11-20 13:38:42.855792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.942 [2024-11-20 13:38:42.861000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.942 [2024-11-20 13:38:42.861070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.942 [2024-11-20 13:38:42.861095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.942 [2024-11-20 13:38:42.866363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.942 [2024-11-20 13:38:42.866448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:30.942 [2024-11-20 13:38:42.866471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.942 [2024-11-20 13:38:42.871600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.942 [2024-11-20 13:38:42.871686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.942 [2024-11-20 13:38:42.871711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.942 [2024-11-20 13:38:42.876820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.942 [2024-11-20 13:38:42.876902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.942 [2024-11-20 13:38:42.876952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.942 [2024-11-20 13:38:42.882044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.942 [2024-11-20 13:38:42.882126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.942 [2024-11-20 13:38:42.882149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.942 [2024-11-20 13:38:42.887328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.942 [2024-11-20 13:38:42.887425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.942 [2024-11-20 13:38:42.887463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.942 [2024-11-20 13:38:42.892676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:30.942 [2024-11-20 13:38:42.892773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.942 [2024-11-20 13:38:42.892796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:31.201 [2024-11-20 13:38:42.897936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.201 [2024-11-20 13:38:42.898018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.201 [2024-11-20 13:38:42.898041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.201 [2024-11-20 13:38:42.903243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.201 [2024-11-20 13:38:42.903343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:31.201 [2024-11-20 13:38:42.903365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:31.201 [2024-11-20 13:38:42.908523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.201 [2024-11-20 13:38:42.908622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.201 [2024-11-20 13:38:42.908645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:31.201 [2024-11-20 13:38:42.913858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.201 [2024-11-20 13:38:42.913967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.201 [2024-11-20 13:38:42.913990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:31.201 [2024-11-20 13:38:42.919268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.201 [2024-11-20 13:38:42.919339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.201 [2024-11-20 13:38:42.919364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.201 [2024-11-20 13:38:42.924470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.201 [2024-11-20 13:38:42.924557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.201 [2024-11-20 13:38:42.924595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:31.201 [2024-11-20 13:38:42.929796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.201 [2024-11-20 13:38:42.929892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.201 [2024-11-20 13:38:42.929917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:31.201 [2024-11-20 13:38:42.934988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.201 [2024-11-20 13:38:42.935074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.201 [2024-11-20 13:38:42.935098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:31.201 [2024-11-20 13:38:42.940302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.201 [2024-11-20 13:38:42.940390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.201 [2024-11-20 13:38:42.940413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.201 [2024-11-20 13:38:42.947182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.201 [2024-11-20 13:38:42.947304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.201 [2024-11-20 13:38:42.947327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:31.201 [2024-11-20 13:38:42.953332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.201 [2024-11-20 13:38:42.953412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.201 [2024-11-20 13:38:42.953436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:31.201 [2024-11-20 13:38:42.958738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:42.958812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:42.958836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:42.964047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:42.964136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:42.964159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:42.969427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:42.969502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:42.969526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:42.974894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:42.974982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:42.975007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:42.980347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:42.980440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:42.980463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:42.985573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:42.985648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:42.985670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:42.990900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:42.990974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:42.990997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:42.996080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:42.996152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:42.996175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:43.001455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.001529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.001552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:43.006649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.006723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.006746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:43.011999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.012073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.012096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:43.017222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.017296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.017322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:43.022511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.022586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.022608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:43.027740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.027825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.027848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:43.033088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.033168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.033208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:43.038640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.038736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.038759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:43.044543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.044618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.044642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:43.050920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.050991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.051014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:43.056649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.056719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.056741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:43.063202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.063321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.063343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:31.202 [2024-11-20 13:38:43.069141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.069237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.069261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:31.202 5714.00 IOPS, 714.25 MiB/s [2024-11-20T13:38:43.159Z] [2024-11-20 13:38:43.075758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdad5b0) with pdu=0x200016eff3c8 00:19:31.202 [2024-11-20 13:38:43.075840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.202 [2024-11-20 13:38:43.075863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.202 00:19:31.202 Latency(us) 00:19:31.202 [2024-11-20T13:38:43.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.202 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:31.202 nvme0n1 : 2.00 5712.63 714.08 0.00 0.00 2794.71 1861.82 7626.01 00:19:31.202 [2024-11-20T13:38:43.159Z] =================================================================================================================== 00:19:31.203 [2024-11-20T13:38:43.160Z] Total : 5712.63 714.08 0.00 0.00 2794.71 1861.82 7626.01 00:19:31.203 { 00:19:31.203 "results": [ 00:19:31.203 { 00:19:31.203 "job": "nvme0n1", 00:19:31.203 "core_mask": "0x2", 00:19:31.203 "workload": "randwrite", 00:19:31.203 "status": "finished", 00:19:31.203 "queue_depth": 16, 00:19:31.203 "io_size": 131072, 00:19:31.203 "runtime": 2.004507, 00:19:31.203 "iops": 5712.62659596599, 00:19:31.203 "mibps": 714.0783244957488, 00:19:31.203 "io_failed": 0, 00:19:31.203 "io_timeout": 0, 00:19:31.203 "avg_latency_us": 2794.70566794484, 00:19:31.203 "min_latency_us": 1861.8181818181818, 00:19:31.203 "max_latency_us": 7626.007272727273 00:19:31.203 } 00:19:31.203 ], 00:19:31.203 "core_count": 1 00:19:31.203 } 00:19:31.203 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:31.203 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:31.203 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_get_iostat -b nvme0n1 00:19:31.203 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:31.203 | .driver_specific 00:19:31.203 | .nvme_error 00:19:31.203 | .status_code 00:19:31.203 | .command_transient_transport_error' 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 370 > 0 )) 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80972 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80972 ']' 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80972 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80972 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:31.797 killing process with pid 80972 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80972' 00:19:31.797 Received shutdown signal, test time was about 2.000000 seconds 00:19:31.797 00:19:31.797 Latency(us) 00:19:31.797 [2024-11-20T13:38:43.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.797 [2024-11-20T13:38:43.754Z] =================================================================================================================== 00:19:31.797 [2024-11-20T13:38:43.754Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80972 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80972 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80784 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80784 ']' 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80784 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80784 00:19:31.797 killing process with pid 80784 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80784' 00:19:31.797 13:38:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80784 00:19:31.797 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80784 00:19:32.056 00:19:32.056 real 0m17.112s 00:19:32.056 user 0m33.963s 00:19:32.056 sys 0m4.628s 00:19:32.056 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.056 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:32.056 ************************************ 00:19:32.056 END TEST nvmf_digest_error 00:19:32.056 ************************************ 00:19:32.056 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:32.056 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:32.056 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:32.056 13:38:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:19:32.056 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:32.056 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:19:32.056 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:32.056 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:32.313 rmmod nvme_tcp 00:19:32.313 rmmod nvme_fabrics 00:19:32.313 rmmod nvme_keyring 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80784 ']' 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80784 00:19:32.313 Process with pid 80784 is not found 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80784 ']' 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80784 00:19:32.313 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80784) - No such process 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80784 is not found' 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 
00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.313 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.579 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:19:32.579 00:19:32.579 real 0m36.929s 00:19:32.579 user 1m11.287s 00:19:32.579 sys 0m9.749s 00:19:32.579 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.579 13:38:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:32.579 ************************************ 00:19:32.579 END TEST nvmf_digest 00:19:32.579 ************************************ 00:19:32.579 13:38:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:19:32.579 13:38:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:19:32.579 13:38:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:32.579 13:38:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:32.579 13:38:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.579 13:38:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.579 ************************************ 00:19:32.579 START TEST nvmf_host_multipath 00:19:32.579 ************************************ 00:19:32.579 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:32.579 * Looking for test storage... 
00:19:32.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:32.579 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:32.579 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:19:32.579 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:32.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.855 --rc genhtml_branch_coverage=1 00:19:32.855 --rc genhtml_function_coverage=1 00:19:32.855 --rc genhtml_legend=1 00:19:32.855 --rc geninfo_all_blocks=1 00:19:32.855 --rc geninfo_unexecuted_blocks=1 00:19:32.855 00:19:32.855 ' 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:32.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.855 --rc genhtml_branch_coverage=1 00:19:32.855 --rc genhtml_function_coverage=1 00:19:32.855 --rc genhtml_legend=1 00:19:32.855 --rc geninfo_all_blocks=1 00:19:32.855 --rc geninfo_unexecuted_blocks=1 00:19:32.855 00:19:32.855 ' 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:32.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.855 --rc genhtml_branch_coverage=1 00:19:32.855 --rc genhtml_function_coverage=1 00:19:32.855 --rc genhtml_legend=1 00:19:32.855 --rc geninfo_all_blocks=1 00:19:32.855 --rc geninfo_unexecuted_blocks=1 00:19:32.855 00:19:32.855 ' 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:32.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.855 --rc genhtml_branch_coverage=1 00:19:32.855 --rc genhtml_function_coverage=1 00:19:32.855 --rc genhtml_legend=1 00:19:32.855 --rc geninfo_all_blocks=1 00:19:32.855 --rc geninfo_unexecuted_blocks=1 00:19:32.855 00:19:32.855 ' 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:32.855 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:32.855 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:32.856 Cannot find device "nvmf_init_br" 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:32.856 Cannot find device "nvmf_init_br2" 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:32.856 Cannot find device "nvmf_tgt_br" 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:32.856 Cannot find device "nvmf_tgt_br2" 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:32.856 Cannot find device "nvmf_init_br" 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:32.856 Cannot find device "nvmf_init_br2" 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:32.856 Cannot find device "nvmf_tgt_br" 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:32.856 Cannot find device "nvmf_tgt_br2" 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:32.856 Cannot find device "nvmf_br" 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:32.856 Cannot find device "nvmf_init_if" 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:32.856 Cannot find device "nvmf_init_if2" 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:32.856 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:32.856 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:32.856 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:33.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:33.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:19:33.114 00:19:33.114 --- 10.0.0.3 ping statistics --- 00:19:33.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.114 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:33.114 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:33.114 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:19:33.114 00:19:33.114 --- 10.0.0.4 ping statistics --- 00:19:33.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.114 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:33.114 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:33.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:33.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:33.115 00:19:33.115 --- 10.0.0.1 ping statistics --- 00:19:33.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.115 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:33.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:33.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:19:33.115 00:19:33.115 --- 10.0.0.2 ping statistics --- 00:19:33.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.115 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=81295 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 81295 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81295 ']' 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.115 13:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:33.115 [2024-11-20 13:38:45.025572] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:19:33.115 [2024-11-20 13:38:45.025694] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.373 [2024-11-20 13:38:45.182744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:33.373 [2024-11-20 13:38:45.254675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.373 [2024-11-20 13:38:45.254752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.373 [2024-11-20 13:38:45.254767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.373 [2024-11-20 13:38:45.254777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.373 [2024-11-20 13:38:45.254786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.373 [2024-11-20 13:38:45.256093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.373 [2024-11-20 13:38:45.256106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.373 [2024-11-20 13:38:45.314912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:33.632 13:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.632 13:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:19:33.632 13:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:33.632 13:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.632 13:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:33.632 13:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.632 13:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81295 00:19:33.632 13:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:33.890 [2024-11-20 13:38:45.724319] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.890 13:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:34.149 Malloc0 00:19:34.149 13:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:34.716 13:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:34.975 13:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:35.234 [2024-11-20 13:38:47.004297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:35.234 13:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:35.493 [2024-11-20 13:38:47.260456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:35.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.493 13:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81346 00:19:35.493 13:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:35.493 13:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:35.493 13:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81346 /var/tmp/bdevperf.sock 00:19:35.493 13:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81346 ']' 00:19:35.493 13:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.493 13:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.493 13:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.493 13:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.493 13:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:36.430 13:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.430 13:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:19:36.430 13:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:36.688 13:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:37.256 Nvme0n1 00:19:37.256 13:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:37.514 Nvme0n1 00:19:37.514 13:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:37.514 13:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:19:38.468 13:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:19:38.468 13:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:38.733 13:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:38.991 13:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:19:38.991 13:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81390 00:19:38.991 13:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81295 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:38.991 13:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:45.555 13:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:45.555 13:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:45.555 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:45.555 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:45.555 Attaching 4 probes... 00:19:45.555 @path[10.0.0.3, 4421]: 17008 00:19:45.555 @path[10.0.0.3, 4421]: 17232 00:19:45.555 @path[10.0.0.3, 4421]: 17288 00:19:45.555 @path[10.0.0.3, 4421]: 17269 00:19:45.555 @path[10.0.0.3, 4421]: 17352 00:19:45.555 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:45.555 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:45.555 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:45.555 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:45.555 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:45.555 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:45.555 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81390 00:19:45.555 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:45.555 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:45.555 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:45.555 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:46.122 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:46.122 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81509 00:19:46.122 13:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81295 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:46.122 13:38:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:52.681 13:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:52.681 13:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:52.681 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:52.681 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:52.681 Attaching 4 probes... 00:19:52.681 @path[10.0.0.3, 4420]: 16359 00:19:52.681 @path[10.0.0.3, 4420]: 17088 00:19:52.681 @path[10.0.0.3, 4420]: 17496 00:19:52.681 @path[10.0.0.3, 4420]: 16962 00:19:52.681 @path[10.0.0.3, 4420]: 17116 00:19:52.681 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:52.681 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:52.681 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:52.681 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:52.681 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:52.681 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:52.681 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81509 00:19:52.681 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:52.681 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:52.681 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:52.681 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:52.939 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:52.939 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81627 00:19:52.939 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81295 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:52.939 13:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:59.505 13:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:59.505 13:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:59.505 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:59.505 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:59.505 Attaching 4 probes... 00:19:59.505 @path[10.0.0.3, 4421]: 14729 00:19:59.505 @path[10.0.0.3, 4421]: 17333 00:19:59.505 @path[10.0.0.3, 4421]: 17375 00:19:59.505 @path[10.0.0.3, 4421]: 17268 00:19:59.505 @path[10.0.0.3, 4421]: 17252 00:19:59.506 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:59.506 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:59.506 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:59.506 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:59.506 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:59.506 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:59.506 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81627 00:19:59.506 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:59.506 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:59.506 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:59.506 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:59.764 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:59.764 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81734 00:19:59.764 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:59.764 13:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81295 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:06.324 Attaching 4 probes... 
00:20:06.324 00:20:06.324 00:20:06.324 00:20:06.324 00:20:06.324 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81734 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:20:06.324 13:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:06.583 13:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:06.842 13:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:20:06.842 13:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81852 00:20:06.842 13:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81295 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:06.842 13:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:13.399 13:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:13.399 13:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:13.399 13:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:13.399 13:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:13.399 Attaching 4 probes... 
00:20:13.399 @path[10.0.0.3, 4421]: 16715 00:20:13.399 @path[10.0.0.3, 4421]: 16897 00:20:13.399 @path[10.0.0.3, 4421]: 16750 00:20:13.399 @path[10.0.0.3, 4421]: 16516 00:20:13.399 @path[10.0.0.3, 4421]: 16743 00:20:13.399 13:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:13.399 13:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:13.399 13:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:13.399 13:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:13.399 13:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:13.399 13:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:13.399 13:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81852 00:20:13.399 13:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:13.399 13:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:13.399 13:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:14.402 13:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:14.402 13:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81976 00:20:14.402 13:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81295 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:14.402 13:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:20.963 Attaching 4 probes... 
00:20:20.963 @path[10.0.0.3, 4420]: 16434 00:20:20.963 @path[10.0.0.3, 4420]: 16731 00:20:20.963 @path[10.0.0.3, 4420]: 16778 00:20:20.963 @path[10.0.0.3, 4420]: 16748 00:20:20.963 @path[10.0.0.3, 4420]: 16640 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81976 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:20.963 [2024-11-20 13:39:32.790123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:20.963 13:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:21.222 13:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:20:27.785 13:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:20:27.785 13:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82150 00:20:27.785 13:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81295 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:27.785 13:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:34.358 Attaching 4 probes... 
00:20:34.358 @path[10.0.0.3, 4421]: 16361 00:20:34.358 @path[10.0.0.3, 4421]: 16542 00:20:34.358 @path[10.0.0.3, 4421]: 16720 00:20:34.358 @path[10.0.0.3, 4421]: 16839 00:20:34.358 @path[10.0.0.3, 4421]: 16744 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82150 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81346 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81346 ']' 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81346 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81346 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:34.358 killing process with pid 81346 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81346' 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81346 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81346 00:20:34.358 { 00:20:34.358 "results": [ 00:20:34.358 { 00:20:34.358 "job": "Nvme0n1", 00:20:34.358 "core_mask": "0x4", 00:20:34.358 "workload": "verify", 00:20:34.358 "status": "terminated", 00:20:34.358 "verify_range": { 00:20:34.358 "start": 0, 00:20:34.358 "length": 16384 00:20:34.358 }, 00:20:34.358 "queue_depth": 128, 00:20:34.358 "io_size": 4096, 00:20:34.358 "runtime": 56.028188, 00:20:34.358 "iops": 7225.113187669035, 00:20:34.358 "mibps": 28.22309838933217, 00:20:34.358 "io_failed": 0, 00:20:34.358 "io_timeout": 0, 00:20:34.358 "avg_latency_us": 17680.90886522297, 00:20:34.358 "min_latency_us": 435.66545454545457, 00:20:34.358 "max_latency_us": 7015926.69090909 00:20:34.358 } 00:20:34.358 ], 00:20:34.358 "core_count": 1 00:20:34.358 } 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81346 00:20:34.358 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:34.358 [2024-11-20 13:38:47.337718] Starting SPDK v25.01-pre git sha1 d2ebd983e / 
DPDK 24.03.0 initialization... 00:20:34.358 [2024-11-20 13:38:47.337840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81346 ] 00:20:34.358 [2024-11-20 13:38:47.487447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.358 [2024-11-20 13:38:47.557770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.358 [2024-11-20 13:38:47.617249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:34.358 Running I/O for 90 seconds... 00:20:34.358 6689.00 IOPS, 26.13 MiB/s [2024-11-20T13:39:46.315Z] 7568.50 IOPS, 29.56 MiB/s [2024-11-20T13:39:46.315Z] 7957.00 IOPS, 31.08 MiB/s [2024-11-20T13:39:46.315Z] 8125.25 IOPS, 31.74 MiB/s [2024-11-20T13:39:46.315Z] 8235.40 IOPS, 32.17 MiB/s [2024-11-20T13:39:46.315Z] 8296.83 IOPS, 32.41 MiB/s [2024-11-20T13:39:46.315Z] 8351.00 IOPS, 32.62 MiB/s [2024-11-20T13:39:46.315Z] 8404.12 IOPS, 32.83 MiB/s [2024-11-20T13:39:46.315Z] [2024-11-20 13:38:57.768541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.768624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.768662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.768681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.768703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.768719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.768742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.768757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.768779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.768794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.768815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.768831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.768852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.768868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.768889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.768904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.768953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.768980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:34.358 [2024-11-20 13:38:57.769370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.358 [2024-11-20 13:38:57.769410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.358 [2024-11-20 13:38:57.769447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.358 [2024-11-20 13:38:57.769484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.358 [2024-11-20 13:38:57.769520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.358 [2024-11-20 13:38:57.769575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.358 [2024-11-20 13:38:57.769614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.358 [2024-11-20 13:38:57.769651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.769975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.769990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.770013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.770028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:34.358 [2024-11-20 13:38:57.770057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.358 [2024-11-20 13:38:57.770074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.770111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.770158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.770207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.770246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.770284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:20:34.359 [2024-11-20 13:38:57.770535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.770970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.770992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.771013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.771050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.771087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.771124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.771160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.771217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.771261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.771300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.771337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.771374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.771412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.771448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.771492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.771530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.771567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.771603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.771640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:34.359 [2024-11-20 13:38:57.771676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.771713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.771750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.771787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.771824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.771861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.771908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.771946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.771975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.772001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.772022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.772037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.772059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.359 [2024-11-20 13:38:57.772074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.772095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.772111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.772132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.772147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.772168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.772197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.772223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.359 [2024-11-20 13:38:57.772239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:34.359 [2024-11-20 13:38:57.772261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.772805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.772841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:20:34.360 [2024-11-20 13:38:57.772863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.772878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.772931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.772964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.773002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.773040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.773058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.773079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.773094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.773116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.773131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.773162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.773178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.773215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.773232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.773259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.773275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.773297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.773312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.773333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.773348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.773370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.773386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.773407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.773422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.773444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.773459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.774710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.360 [2024-11-20 13:38:57.774753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.774783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.774801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.774823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.774839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.774860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.774875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.774897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.774912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.774934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.774949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.774970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.774985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:34.360 [2024-11-20 13:38:57.775386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.775674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.360 [2024-11-20 13:38:57.775689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:34.360 [2024-11-20 13:38:57.776015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.776097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.776149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.776202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.776244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.776280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.776318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.776354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.776391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:20:34.361 [2024-11-20 13:38:57.776874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.776951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.776985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.777010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.777358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.777980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.777995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.778032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.778068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.778105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.778141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.778178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.778232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.778270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.361 [2024-11-20 13:38:57.778306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.778343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:34.361 [2024-11-20 13:38:57.778393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.778430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.778467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.778504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:34.361 [2024-11-20 13:38:57.778525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.361 [2024-11-20 13:38:57.778540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.778562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.778577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.778614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.778633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.778656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.778671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.778693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.778708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.778730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.778745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.778772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.778787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.778808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.778824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.778845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.778868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.787681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.787727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.787754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.787776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.787798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.787814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.787836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.787851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.787874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.787889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.787910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.787925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.787947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.787972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.787994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.788009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:20:34.362 [2024-11-20 13:38:57.788429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.788754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.788790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.788827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.788863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.788899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.788955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.788977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.788992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.789028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.789065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.789107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.789144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.789200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.789240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.789277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.789314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.789350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.362 [2024-11-20 13:38:57.789386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:34.362 [2024-11-20 13:38:57.789577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.789983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.789998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.790020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.790035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.790056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.790071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.790092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.790113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.790136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.790152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.790173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.790275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.790309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.790326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:34.362 [2024-11-20 13:38:57.790348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.362 [2024-11-20 13:38:57.790364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.790405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.790442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.790480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.790517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.790554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.790591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.790634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.790683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.790722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.790758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.790795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:20:34.363 [2024-11-20 13:38:57.790817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.790832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.790868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.790905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.790941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.790963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.790978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.791000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.791015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.791036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.791052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.791080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.791096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.791117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.791132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.791161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.791178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.791215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.791231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.791254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.791275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.793348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.793397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.793970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.793985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:34.363 [2024-11-20 13:38:57.794022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.794058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.794095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.794132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.794179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.794236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.794273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.794310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.794347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 
nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.363 [2024-11-20 13:38:57.794900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.794936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.794974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.794995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.795011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.795033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.363 [2024-11-20 13:38:57.795048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.363 [2024-11-20 13:38:57.795070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.795085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.795122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.795166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
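
Interleaved with these dumps, the harness also emits periodic throughput samples (e.g. "8332.00 IOPS, 32.55 MiB/s" a little further down), and the samples visible here stay in a narrow 8332 to 8422 IOPS band, suggesting I/O keeps flowing at roughly the same rate through the ANA state change. A rough average over all samples can be pulled from a saved copy of this log (again assuming an illustrative build.log):

  grep -oE '[0-9]+\.[0-9]+ IOPS, [0-9]+\.[0-9]+ MiB/s' build.log \
    | awk '{iops += $1; mib += $3; n++} END { if (n) printf "samples=%d avg_iops=%.2f avg_mib_s=%.2f\n", n, iops/n, mib/n }'
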
00:20:34.364 [2024-11-20 13:38:57.795200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.795218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.795932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.795968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.795990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.796561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.796605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.796643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.796679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.796716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:83 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.796753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.796795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.796817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.796833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.805920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:34.364 
[2024-11-20 13:38:57.805942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.805957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.805988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.806004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.806030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.806045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.806067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.806082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.806104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.806119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.806141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.806156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.806178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.806211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.806235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.364 [2024-11-20 13:38:57.806251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.806283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.806299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.806321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.806336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.806358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.806373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:34.364 [2024-11-20 13:38:57.806395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.364 [2024-11-20 13:38:57.806410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:38:57.806432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:38:57.806448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:38:57.806477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:38:57.806502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:38:57.806525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:38:57.806541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:38:57.806562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:38:57.806577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:38:57.806599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:38:57.806614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:38:57.806636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:38:57.806651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:38:57.806674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:38:57.806689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:38:57.806710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:38:57.806725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:38:57.806747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:38:57.806763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:38:57.808147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:38:57.808180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:34.365 8332.00 IOPS, 32.55 MiB/s [2024-11-20T13:39:46.322Z] 8357.80 IOPS, 32.65 MiB/s [2024-11-20T13:39:46.322Z] 8376.36 IOPS, 32.72 MiB/s [2024-11-20T13:39:46.322Z] 8406.83 IOPS, 32.84 MiB/s [2024-11-20T13:39:46.322Z] 8400.15 IOPS, 32.81 MiB/s [2024-11-20T13:39:46.322Z] 8421.86 IOPS, 32.90 MiB/s [2024-11-20T13:39:46.322Z] [2024-11-20 13:39:04.392657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.392743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.392807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.392830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.392853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.392869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.392936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.392963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.392996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.393021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.393072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.393109] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.393146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.393207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.393247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.393284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.393320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.393356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.393391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.393426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.393485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.393521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.393560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.393597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.393633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.393669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.393706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.393742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.393778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.393817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:93208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.393853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:117 nsid:1 lba:93216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.393891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.393935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.393974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.393995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.394011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.394047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.394084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394279] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.394599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.394636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 
dnr:0 00:20:34.365 [2024-11-20 13:39:04.394657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.394673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.394709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.394747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.394785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.394823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.365 [2024-11-20 13:39:04.394860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.394978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.394999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.395014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.395035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.395051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.395072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.395088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.395109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.395125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.395146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.395162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:34.365 [2024-11-20 13:39:04.395195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.365 [2024-11-20 13:39:04.395213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.395253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.395290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.395327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.395378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.395416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.395453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.395490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.395527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.395564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.395600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.395636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.395673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.395709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.395746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:34.366 [2024-11-20 13:39:04.395783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.395826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.395866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.395903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.395939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.395976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.395998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.396639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.396676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.396713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.396750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.396795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.396832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.396869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.396891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.396906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:20:34.366 [2024-11-20 13:39:04.397764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.397797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.397833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.397851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.397880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.397896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.397925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.397940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.397969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.397984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.398012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.398028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.398055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.398071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.398100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.398116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.398283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.398309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.398344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.398361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.398390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.398406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.398435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.398450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.398480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.398495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.398523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.398539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.398567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.366 [2024-11-20 13:39:04.398583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.398613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.398628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.398657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.366 [2024-11-20 13:39:04.398672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:34.366 [2024-11-20 13:39:04.398701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:04.398717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:04.398745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:04.398761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:04.398789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:04.398805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:04.398833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:04.398859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:04.398890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:04.398906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:04.398935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:04.398951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:34.367 8422.00 IOPS, 32.90 MiB/s [2024-11-20T13:39:46.324Z] 7911.69 IOPS, 30.91 MiB/s [2024-11-20T13:39:46.324Z] 7956.41 IOPS, 31.08 MiB/s [2024-11-20T13:39:46.324Z] 7996.17 IOPS, 31.24 MiB/s [2024-11-20T13:39:46.324Z] 8032.16 IOPS, 31.38 MiB/s [2024-11-20T13:39:46.324Z] 8063.05 IOPS, 31.50 MiB/s [2024-11-20T13:39:46.324Z] 8089.67 IOPS, 31.60 MiB/s [2024-11-20T13:39:46.324Z] 8113.23 IOPS, 31.69 MiB/s [2024-11-20T13:39:46.324Z] [2024-11-20 13:39:11.643339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.643419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.643482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.643505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.643529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.643544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.643567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.643582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.643604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.643619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.643640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.643655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.643677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.643692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.643713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.643728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.643750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.643765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.643817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.643834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.643855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.643870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.643891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.643906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.643928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.643943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.643964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.643978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.644015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644036] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:81 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.644050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644688] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.644948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.644982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.645008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.645073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.645113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:20:34.367 [2024-11-20 13:39:11.645135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.645151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.645203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.645246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.645283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.645319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645886] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.367 [2024-11-20 13:39:11.645923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.645968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.645990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.646005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.646035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.646051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.646073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.646088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.646110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.646125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.646147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.646162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.646196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.646214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.646236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.646252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.646274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.646289] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.646312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.646327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.367 [2024-11-20 13:39:11.646349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.367 [2024-11-20 13:39:11.646364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.646402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.646438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.646476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.646522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.646560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.646597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.646634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:34.368 [2024-11-20 13:39:11.646670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.646708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.646744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.646781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.646817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.646854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.646891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.646929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.646972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.646995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.647010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1656 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.647047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.647090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.647127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.647163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.647213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.647251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.647287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.647325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.647361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.647399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647421] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.647436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.647482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.647519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.647565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.647609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.647647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.647683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.647720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.647743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.647758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.648494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:11.648523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 
13:39:11.648558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.648575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.648604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.648619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.648647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.648663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.648706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.648723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.648751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.648766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.648795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.648810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.648839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.648854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.648898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.648936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.648979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.649004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.649054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.649076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 
sqhd:0032 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.649106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.649122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.649150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.649165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.649207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.649228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.649257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.649272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.649301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.649316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.649349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.649385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.649416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.649431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:11.649460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:11.649476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:34.368 7840.83 IOPS, 30.63 MiB/s [2024-11-20T13:39:46.325Z] 7514.12 IOPS, 29.35 MiB/s [2024-11-20T13:39:46.325Z] 7213.56 IOPS, 28.18 MiB/s [2024-11-20T13:39:46.325Z] 6936.12 IOPS, 27.09 MiB/s [2024-11-20T13:39:46.325Z] 6679.22 IOPS, 26.09 MiB/s [2024-11-20T13:39:46.325Z] 6440.68 IOPS, 25.16 MiB/s [2024-11-20T13:39:46.325Z] 6218.59 IOPS, 24.29 MiB/s [2024-11-20T13:39:46.325Z] 6230.57 IOPS, 24.34 MiB/s [2024-11-20T13:39:46.325Z] 6302.61 IOPS, 24.62 MiB/s [2024-11-20T13:39:46.325Z] 6369.66 IOPS, 24.88 MiB/s [2024-11-20T13:39:46.325Z] 6428.27 IOPS, 25.11 MiB/s [2024-11-20T13:39:46.325Z] 6484.38 IOPS, 25.33 MiB/s [2024-11-20T13:39:46.325Z] 
6537.06 IOPS, 25.54 MiB/s [2024-11-20T13:39:46.325Z] [2024-11-20 13:39:25.178045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:25.178136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:25.178217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:25.178241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:25.178265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:25.178281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:25.178303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:25.178318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:25.178340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:25.178355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:25.178376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:25.178391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:25.178412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:25.178427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:25.178448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.368 [2024-11-20 13:39:25.178463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:25.178518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:25.178536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:25.178558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.368 [2024-11-20 13:39:25.178572] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:34.368 [2024-11-20 13:39:25.178594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.178608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.178630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.178645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.178666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.178680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.178701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.178716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.178737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.178752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.178773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.178787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.178808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.178823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.178847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.178862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.178884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.178898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.178920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 
13:39:25.178935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.178956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.178981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.179019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.179056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.179093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.179129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.179165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.179224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.179261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.179297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45712 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.179333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.179370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.179407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46240 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.179944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.179972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.179989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 
13:39:25.180002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.369 [2024-11-20 13:39:25.180910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.180972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.369 [2024-11-20 13:39:25.180986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.369 [2024-11-20 13:39:25.181001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 
[2024-11-20 13:39:25.181259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.370 [2024-11-20 13:39:25.181458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.370 [2024-11-20 13:39:25.181488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.370 [2024-11-20 13:39:25.181518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.370 [2024-11-20 13:39:25.181547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.370 [2024-11-20 13:39:25.181576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.370 [2024-11-20 13:39:25.181605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.370 [2024-11-20 13:39:25.181634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:34.370 [2024-11-20 13:39:25.181663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:73 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.181969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.181984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.182014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.182042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.182071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.182101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.182130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebe290 is same with the state(6) to be set 00:20:34.370 [2024-11-20 13:39:25.182161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:34.370 [2024-11-20 13:39:25.182172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:34.370 [2024-11-20 13:39:25.182193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46112 len:8 PRP1 0x0 PRP2 0x0 00:20:34.370 [2024-11-20 13:39:25.182209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:34.370 [2024-11-20 13:39:25.182270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:34.370 [2024-11-20 13:39:25.182281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46504 len:8 PRP1 0x0 PRP2 0x0 00:20:34.370 [2024-11-20 13:39:25.182295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:34.370 [2024-11-20 13:39:25.182318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:34.370 [2024-11-20 13:39:25.182329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46512 len:8 PRP1 0x0 PRP2 0x0 00:20:34.370 [2024-11-20 13:39:25.182342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:34.370 [2024-11-20 13:39:25.182367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:34.370 [2024-11-20 13:39:25.182383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46520 len:8 PRP1 0x0 PRP2 0x0 00:20:34.370 [2024-11-20 13:39:25.182401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:34.370 [2024-11-20 13:39:25.182426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:34.370 [2024-11-20 13:39:25.182436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46528 len:8 PRP1 0x0 PRP2 0x0 00:20:34.370 [2024-11-20 13:39:25.182452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:34.370 [2024-11-20 13:39:25.182477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:34.370 [2024-11-20 13:39:25.182487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46536 len:8 PRP1 0x0 PRP2 0x0 00:20:34.370 [2024-11-20 13:39:25.182501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:34.370 [2024-11-20 13:39:25.182523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:34.370 [2024-11-20 13:39:25.182533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:46544 len:8 PRP1 0x0 PRP2 0x0 00:20:34.370 [2024-11-20 13:39:25.182546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:34.370 [2024-11-20 13:39:25.182569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:34.370 [2024-11-20 13:39:25.182579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46552 len:8 PRP1 0x0 PRP2 0x0 00:20:34.370 [2024-11-20 13:39:25.182592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.182605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:34.370 [2024-11-20 13:39:25.182615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:34.370 [2024-11-20 13:39:25.182625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46560 len:8 PRP1 0x0 PRP2 0x0 00:20:34.370 [2024-11-20 13:39:25.182649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.183877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:34.370 [2024-11-20 13:39:25.183963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.370 [2024-11-20 13:39:25.183987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.370 [2024-11-20 13:39:25.184020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2f1d0 (9): Bad file descriptor 00:20:34.370 [2024-11-20 13:39:25.184471] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:34.370 [2024-11-20 13:39:25.184508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2f1d0 with addr=10.0.0.3, port=4421 00:20:34.370 [2024-11-20 13:39:25.184526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2f1d0 is same with the state(6) to be set 00:20:34.370 [2024-11-20 13:39:25.184586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2f1d0 (9): Bad file descriptor 00:20:34.370 [2024-11-20 13:39:25.184627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:34.370 [2024-11-20 13:39:25.184656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:34.370 [2024-11-20 13:39:25.184672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:34.370 [2024-11-20 13:39:25.184692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:20:34.370 [2024-11-20 13:39:25.184707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:34.370 6588.17 IOPS, 25.74 MiB/s [2024-11-20T13:39:46.327Z] 6638.43 IOPS, 25.93 MiB/s [2024-11-20T13:39:46.327Z] 6679.95 IOPS, 26.09 MiB/s [2024-11-20T13:39:46.327Z] 6725.08 IOPS, 26.27 MiB/s [2024-11-20T13:39:46.327Z] 6766.15 IOPS, 26.43 MiB/s [2024-11-20T13:39:46.327Z] 6806.20 IOPS, 26.59 MiB/s [2024-11-20T13:39:46.327Z] 6841.29 IOPS, 26.72 MiB/s [2024-11-20T13:39:46.327Z] 6878.28 IOPS, 26.87 MiB/s [2024-11-20T13:39:46.327Z] 6910.32 IOPS, 26.99 MiB/s [2024-11-20T13:39:46.327Z] 6944.49 IOPS, 27.13 MiB/s [2024-11-20T13:39:46.327Z] [2024-11-20 13:39:35.233907] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:20:34.370 6974.50 IOPS, 27.24 MiB/s [2024-11-20T13:39:46.327Z] 7010.87 IOPS, 27.39 MiB/s [2024-11-20T13:39:46.327Z] 7042.15 IOPS, 27.51 MiB/s [2024-11-20T13:39:46.327Z] 7071.82 IOPS, 27.62 MiB/s [2024-11-20T13:39:46.327Z] 7092.06 IOPS, 27.70 MiB/s [2024-11-20T13:39:46.327Z] 7113.78 IOPS, 27.79 MiB/s [2024-11-20T13:39:46.327Z] 7136.21 IOPS, 27.88 MiB/s [2024-11-20T13:39:46.327Z] 7159.15 IOPS, 27.97 MiB/s [2024-11-20T13:39:46.327Z] 7182.87 IOPS, 28.06 MiB/s [2024-11-20T13:39:46.327Z] 7203.98 IOPS, 28.14 MiB/s [2024-11-20T13:39:46.327Z] 7226.04 IOPS, 28.23 MiB/s [2024-11-20T13:39:46.327Z] Received shutdown signal, test time was about 56.028903 seconds 00:20:34.370 00:20:34.370 Latency(us) 00:20:34.370 [2024-11-20T13:39:46.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.370 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:34.370 Verification LBA range: start 0x0 length 0x4000 00:20:34.370 Nvme0n1 : 56.03 7225.11 28.22 0.00 0.00 17680.91 435.67 7015926.69 00:20:34.370 [2024-11-20T13:39:46.327Z] =================================================================================================================== 00:20:34.370 [2024-11-20T13:39:46.327Z] Total : 7225.11 28.22 0.00 0.00 17680.91 435.67 7015926.69 00:20:34.370 13:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:34.370 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:20:34.370 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:34.370 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:20:34.370 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:34.370 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:20:34.370 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:34.370 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:20:34.370 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:34.370 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:34.370 rmmod nvme_tcp 00:20:34.370 rmmod nvme_fabrics 00:20:34.370 rmmod nvme_keyring 00:20:34.370 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:34.370 13:39:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:20:34.370 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:20:34.370 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 81295 ']' 00:20:34.370 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 81295 00:20:34.371 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81295 ']' 00:20:34.371 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81295 00:20:34.371 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:20:34.371 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.371 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81295 00:20:34.371 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:34.371 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:34.371 killing process with pid 81295 00:20:34.371 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81295' 00:20:34.371 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81295 00:20:34.371 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81295 00:20:34.629 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:34.629 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:34.629 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:34.629 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:20:34.629 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:20:34.629 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:34.629 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:20:34.629 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:34.629 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:34.629 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:34.629 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:34.630 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:34.630 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:34.630 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:34.630 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:34.630 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:34.630 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 
00:20:34.630 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:20:34.888 00:20:34.888 real 1m2.363s 00:20:34.888 user 2m53.803s 00:20:34.888 sys 0m18.301s 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:34.888 ************************************ 00:20:34.888 END TEST nvmf_host_multipath 00:20:34.888 ************************************ 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.888 ************************************ 00:20:34.888 START TEST nvmf_timeout 00:20:34.888 ************************************ 00:20:34.888 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:35.214 * Looking for test storage... 
00:20:35.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:35.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.214 --rc genhtml_branch_coverage=1 00:20:35.214 --rc genhtml_function_coverage=1 00:20:35.214 --rc genhtml_legend=1 00:20:35.214 --rc geninfo_all_blocks=1 00:20:35.214 --rc geninfo_unexecuted_blocks=1 00:20:35.214 00:20:35.214 ' 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:35.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.214 --rc genhtml_branch_coverage=1 00:20:35.214 --rc genhtml_function_coverage=1 00:20:35.214 --rc genhtml_legend=1 00:20:35.214 --rc geninfo_all_blocks=1 00:20:35.214 --rc geninfo_unexecuted_blocks=1 00:20:35.214 00:20:35.214 ' 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:35.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.214 --rc genhtml_branch_coverage=1 00:20:35.214 --rc genhtml_function_coverage=1 00:20:35.214 --rc genhtml_legend=1 00:20:35.214 --rc geninfo_all_blocks=1 00:20:35.214 --rc geninfo_unexecuted_blocks=1 00:20:35.214 00:20:35.214 ' 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:35.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.214 --rc genhtml_branch_coverage=1 00:20:35.214 --rc genhtml_function_coverage=1 00:20:35.214 --rc genhtml_legend=1 00:20:35.214 --rc geninfo_all_blocks=1 00:20:35.214 --rc geninfo_unexecuted_blocks=1 00:20:35.214 00:20:35.214 ' 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.214 
13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.214 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:35.215 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:35.215 13:39:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:35.215 13:39:47 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:35.215 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:35.216 Cannot find device "nvmf_init_br" 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:35.216 Cannot find device "nvmf_init_br2" 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:20:35.216 Cannot find device "nvmf_tgt_br" 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:35.216 Cannot find device "nvmf_tgt_br2" 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:35.216 Cannot find device "nvmf_init_br" 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:35.216 Cannot find device "nvmf_init_br2" 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:35.216 Cannot find device "nvmf_tgt_br" 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:35.216 Cannot find device "nvmf_tgt_br2" 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:35.216 Cannot find device "nvmf_br" 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:35.216 Cannot find device "nvmf_init_if" 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:35.216 Cannot find device "nvmf_init_if2" 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:35.216 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:35.216 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:35.216 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
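The nvmf_veth_init trace above amounts to the following condensed sketch: two initiator interfaces stay in the default namespace, two target interfaces move into the nvmf_tgt_ns_spdk namespace, and the host-side veth peers are joined by the nvmf_br bridge with NVMe/TCP port 4420 allowed. All interface names and addresses are taken from the trace itself; only the grouping into a loop (and the omission of the individual "ip link set ... up" calls) is a condensation, not a change to the procedure:
  # create the target namespace and the four veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator addresses in the default namespace, target addresses inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bridge the host-side peers together and accept NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
The ping checks that follow in the trace verify this topology (10.0.0.3/10.0.0.4 reachable from the host, 10.0.0.1/10.0.0.2 reachable from inside the namespace) before the target is started.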
00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:35.475 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:35.475 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:20:35.475 00:20:35.475 --- 10.0.0.3 ping statistics --- 00:20:35.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.475 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:35.475 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:35.475 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:20:35.475 00:20:35.475 --- 10.0.0.4 ping statistics --- 00:20:35.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.475 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:20:35.475 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:35.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:35.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:35.734 00:20:35.734 --- 10.0.0.1 ping statistics --- 00:20:35.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.734 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:35.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:20:35.734 00:20:35.734 --- 10.0.0.2 ping statistics --- 00:20:35.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.734 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82507 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82507 00:20:35.734 13:39:47 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82507 ']' 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.734 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:35.734 [2024-11-20 13:39:47.529048] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:20:35.734 [2024-11-20 13:39:47.529201] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.734 [2024-11-20 13:39:47.685910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:35.993 [2024-11-20 13:39:47.759671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.993 [2024-11-20 13:39:47.759753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.993 [2024-11-20 13:39:47.759768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.993 [2024-11-20 13:39:47.759779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.993 [2024-11-20 13:39:47.759788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:35.993 [2024-11-20 13:39:47.761097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.993 [2024-11-20 13:39:47.761112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.993 [2024-11-20 13:39:47.819829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:35.993 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.993 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:35.993 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.993 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.993 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:35.993 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.993 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:35.993 13:39:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:36.250 [2024-11-20 13:39:48.172158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.250 13:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:36.817 Malloc0 00:20:36.817 13:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.076 13:39:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.334 13:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:37.592 [2024-11-20 13:39:49.502114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:37.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.592 13:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82550 00:20:37.592 13:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:37.592 13:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82550 /var/tmp/bdevperf.sock 00:20:37.592 13:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82550 ']' 00:20:37.592 13:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.592 13:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.592 13:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:37.592 13:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.592 13:39:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:37.851 [2024-11-20 13:39:49.576235] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:20:37.851 [2024-11-20 13:39:49.576320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82550 ] 00:20:37.851 [2024-11-20 13:39:49.726148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.851 [2024-11-20 13:39:49.800659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.110 [2024-11-20 13:39:49.865045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:38.678 13:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.678 13:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:38.678 13:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:38.937 13:39:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:39.505 NVMe0n1 00:20:39.505 13:39:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82578 00:20:39.505 13:39:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:39.505 13:39:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:20:39.505 Running I/O for 10 seconds... 
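In plain shell, the setup traced in the preceding lines amounts to the sequence below; every address, NQN, socket path and flag is copied from the trace, with the long /home/vagrant/spdk_repo prefixes abbreviated to rpc.py, bdevperf and bdevperf.py for readability.

    # Target side: TCP transport, a 64 MB malloc bdev (512-byte blocks), and a subsystem
    # listening on 10.0.0.3:4420
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Host side: bdevperf on its own RPC socket, with the attach options that drive this test:
    # reconnect every 2 s, declare the controller lost after 5 s
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests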
00:20:40.441 13:39:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:40.702 6805.00 IOPS, 26.58 MiB/s [2024-11-20T13:39:52.659Z] [2024-11-20 13:39:52.429270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.702 [2024-11-20 13:39:52.429340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.702 [2024-11-20 13:39:52.429365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-11-20 13:39:52.429377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.702 [2024-11-20 13:39:52.429390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-11-20 13:39:52.429400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.702 [2024-11-20 13:39:52.429412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-11-20 13:39:52.429422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.702 [2024-11-20 13:39:52.429433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-11-20 13:39:52.429443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.702 [2024-11-20 13:39:52.429455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-11-20 13:39:52.429465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.702 [2024-11-20 13:39:52.429476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-11-20 13:39:52.429485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.702 [2024-11-20 13:39:52.429497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-11-20 13:39:52.429506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.702 [2024-11-20 13:39:52.429518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.702 [2024-11-20 13:39:52.429527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.702 [2024-11-20 13:39:52.429539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-11-20 13:39:52.429548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.702 [2024-11-20 13:39:52.429560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-11-20 13:39:52.429575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.702 [2024-11-20 13:39:52.429586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.702 [2024-11-20 13:39:52.429596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.702 [2024-11-20 13:39:52.429607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.429963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 
13:39:52.429984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.429995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.703 [2024-11-20 13:39:52.430304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.703 [2024-11-20 13:39:52.430313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 
[2024-11-20 13:39:52.430931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.430982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.430993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.431003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.431015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.704 [2024-11-20 13:39:52.431024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.704 [2024-11-20 13:39:52.431036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:120 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63536 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.705 [2024-11-20 13:39:52.431709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.705 [2024-11-20 13:39:52.431723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.706 [2024-11-20 13:39:52.431732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.431744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.706 [2024-11-20 13:39:52.431753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.431764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.706 [2024-11-20 13:39:52.431782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.431794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.706 [2024-11-20 13:39:52.431804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.431815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:40.706 [2024-11-20 13:39:52.431824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.431835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.706 [2024-11-20 13:39:52.431845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.431855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1827f60 is same with the state(6) to be set 00:20:40.706 [2024-11-20 13:39:52.431868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.431876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.431884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63632 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.431894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.431905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.431913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.431921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63656 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.431931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.431940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.431948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.431956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63664 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.431965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.431974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.431981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.431989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63672 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.431999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.432008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.432015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.432023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63680 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.432032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.432042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.432049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.432057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63688 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.432066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.432081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.432088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.432097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63696 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.432106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.432115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.432123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.432130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63704 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.432139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.432149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.432156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.432164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63712 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.432174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.432193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.432203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.432211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63720 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.432220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.432230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.432238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.432246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63728 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.432255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:40.706 [2024-11-20 13:39:52.432264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.432272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.432280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63736 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.432289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.432299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.432306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.432314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63744 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.432323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.432333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.432340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.432348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63752 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.432357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.432372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.432384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.432392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63760 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.432401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.432411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:40.706 [2024-11-20 13:39:52.432418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:40.706 [2024-11-20 13:39:52.432426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63768 len:8 PRP1 0x0 PRP2 0x0 00:20:40.706 [2024-11-20 13:39:52.432435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.706 [2024-11-20 13:39:52.432572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.707 [2024-11-20 13:39:52.432595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.707 [2024-11-20 13:39:52.432607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.707 [2024-11-20 13:39:52.432616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.707 [2024-11-20 13:39:52.432627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.707 [2024-11-20 13:39:52.432637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.707 [2024-11-20 13:39:52.432647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.707 [2024-11-20 13:39:52.432679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.707 [2024-11-20 13:39:52.432691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bae50 is same with the state(6) to be set 00:20:40.707 [2024-11-20 13:39:52.432960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:40.707 [2024-11-20 13:39:52.432989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bae50 (9): Bad file descriptor 00:20:40.707 [2024-11-20 13:39:52.433106] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-11-20 13:39:52.433128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17bae50 with addr=10.0.0.3, port=4420 00:20:40.707 [2024-11-20 13:39:52.433139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bae50 is same with the state(6) to be set 00:20:40.707 [2024-11-20 13:39:52.433158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bae50 (9): Bad file descriptor 00:20:40.707 [2024-11-20 13:39:52.433175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:40.707 [2024-11-20 13:39:52.433198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:40.707 [2024-11-20 13:39:52.433211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:40.707 [2024-11-20 13:39:52.433223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
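The long abort dump above is the direct consequence of the nvmf_subsystem_remove_listener call that opened this window: the target drops the TCP listener, the in-flight verify I/Os are completed manually as ABORTED - SQ DELETION, and the host falls into the reconnect loop it was configured for at attach time, retrying every 2 seconds and failing each attempt with connect() errno 111 (connection refused) because nothing is listening on 10.0.0.3:4420 any more. The same transition can be observed from the RPC side with the commands the script itself uses a few lines further down (socket path and NQN as in the trace):

    # Pull the listener out from under the initiator
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # The controller and its bdev stay visible while reconnects are still pending,
    # and disappear once the 5 s ctrlr-loss timeout expires
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'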
00:20:40.707 [2024-11-20 13:39:52.433234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:40.707 13:39:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:20:42.652 3922.00 IOPS, 15.32 MiB/s [2024-11-20T13:39:54.609Z] 2614.67 IOPS, 10.21 MiB/s [2024-11-20T13:39:54.609Z] [2024-11-20 13:39:54.433614] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.652 [2024-11-20 13:39:54.433700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17bae50 with addr=10.0.0.3, port=4420
00:20:42.652 [2024-11-20 13:39:54.433716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bae50 is same with the state(6) to be set
00:20:42.652 [2024-11-20 13:39:54.433746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bae50 (9): Bad file descriptor
00:20:42.652 [2024-11-20 13:39:54.433767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:42.652 [2024-11-20 13:39:54.433777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:42.652 [2024-11-20 13:39:54.433790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:42.652 [2024-11-20 13:39:54.433802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:42.652 [2024-11-20 13:39:54.433814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:42.652 13:39:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:20:42.652 13:39:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:42.652 13:39:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:20:42.911 13:39:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:20:42.911 13:39:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:20:42.911 13:39:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:20:42.911 13:39:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:20:43.169 13:39:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:20:43.169 13:39:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:20:44.364 1961.00 IOPS, 7.66 MiB/s [2024-11-20T13:39:56.580Z] 1568.80 IOPS, 6.13 MiB/s [2024-11-20T13:39:56.580Z] [2024-11-20 13:39:56.434059] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:44.623 [2024-11-20 13:39:56.434139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17bae50 with addr=10.0.0.3, port=4420
00:20:44.623 [2024-11-20 13:39:56.434156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bae50 is same with the state(6) to be set
00:20:44.623 [2024-11-20 13:39:56.434200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bae50 (9): Bad file descriptor
00:20:44.623 [2024-11-20 13:39:56.434242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:44.623 [2024-11-20 13:39:56.434255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:44.623 [2024-11-20 13:39:56.434267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:44.623 [2024-11-20 13:39:56.434279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:44.623 [2024-11-20 13:39:56.434291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:46.491 1307.33 IOPS, 5.11 MiB/s [2024-11-20T13:39:58.448Z] 1120.57 IOPS, 4.38 MiB/s [2024-11-20T13:39:58.448Z] [2024-11-20 13:39:58.434501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:46.491 [2024-11-20 13:39:58.434552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:46.491 [2024-11-20 13:39:58.434565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:46.491 [2024-11-20 13:39:58.434576] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:20:46.491 [2024-11-20 13:39:58.434589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:47.725 980.50 IOPS, 3.83 MiB/s
00:20:47.725 Latency(us)
00:20:47.725 [2024-11-20T13:39:59.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:47.725 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:47.725 Verification LBA range: start 0x0 length 0x4000
00:20:47.725 NVMe0n1 : 8.16 961.62 3.76 15.69 0.00 130777.85 3678.95 7046430.72
00:20:47.725 [2024-11-20T13:39:59.682Z] ===================================================================================================================
00:20:47.725 [2024-11-20T13:39:59.682Z] Total : 961.62 3.76 15.69 0.00 130777.85 3678.95 7046430.72
00:20:47.725 {
00:20:47.725 "results": [
00:20:47.725 {
00:20:47.725 "job": "NVMe0n1",
00:20:47.725 "core_mask": "0x4",
00:20:47.725 "workload": "verify",
00:20:47.725 "status": "finished",
00:20:47.725 "verify_range": {
00:20:47.725 "start": 0,
00:20:47.725 "length": 16384
00:20:47.725 },
00:20:47.725 "queue_depth": 128,
00:20:47.725 "io_size": 4096,
00:20:47.725 "runtime": 8.157083,
00:20:47.725 "iops": 961.6182647644997,
00:20:47.725 "mibps": 3.756321346736327,
00:20:47.725 "io_failed": 128,
00:20:47.725 "io_timeout": 0,
00:20:47.725 "avg_latency_us": 130777.85327920449,
00:20:47.725 "min_latency_us": 3678.9527272727273,
00:20:47.725 "max_latency_us": 7046430.72
00:20:47.725 }
00:20:47.725 ],
00:20:47.725 "core_count": 1
00:20:47.725 }
00:20:48.291 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:20:48.291 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:48.291 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:20:48.549 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:20:48.549 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:20:48.549 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:20:48.549 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:20:48.808 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:20:48.808 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82578
00:20:48.808 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82550
00:20:48.808 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82550 ']'
00:20:48.808 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82550
00:20:48.808 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:20:48.808 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:48.808 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82550
00:20:48.808 killing process with pid 82550 Received shutdown signal, test time was about 9.415074 seconds
00:20:48.808
00:20:48.808 Latency(us)
00:20:48.808 [2024-11-20T13:40:00.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:48.808 [2024-11-20T13:40:00.765Z] ===================================================================================================================
00:20:48.808 [2024-11-20T13:40:00.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:48.808 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:20:48.808 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:20:48.808 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82550'
00:20:48.808 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82550
00:20:48.808 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82550
00:20:49.066 13:40:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:20:49.324 [2024-11-20 13:40:01.122411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:20:49.324 13:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:20:49.324 13:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82702
00:20:49.324 13:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82702 /var/tmp/bdevperf.sock
00:20:49.324 13:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82702 ']'
00:20:49.324 13:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:49.324 13:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:49.324 13:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:49.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:49.324 13:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:49.324 13:40:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:20:49.324 [2024-11-20 13:40:01.202689] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:20:49.324 [2024-11-20 13:40:01.202831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82702 ]
00:20:49.582 [2024-11-20 13:40:01.348424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:49.582 [2024-11-20 13:40:01.411471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:49.582 [2024-11-20 13:40:01.465909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:50.538 13:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:50.538 13:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:20:50.538 13:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:20:50.538 13:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:20:51.105 NVMe0n1
00:20:51.105 13:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82720
00:20:51.105 13:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:51.105 13:40:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:20:51.105 Running I/O for 10 seconds...
00:20:52.048 13:40:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:52.310 6812.00 IOPS, 26.61 MiB/s [2024-11-20T13:40:04.267Z] [2024-11-20 13:40:04.061444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.310 [2024-11-20 13:40:04.061507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62872 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.310 [2024-11-20 13:40:04.061891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.310 [2024-11-20 13:40:04.061902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:52.311 [2024-11-20 13:40:04.061912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.061923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.061932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.061943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.061953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.061964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.061973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.061984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.061993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062113] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062332] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.311 [2024-11-20 13:40:04.062700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.311 [2024-11-20 13:40:04.062709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.062719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 
[2024-11-20 13:40:04.062739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.062759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.062779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.062799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.062819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.062840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.062861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.062881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.062900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.062921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.062940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.062960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.062980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.062989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:104 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.312 [2024-11-20 13:40:04.063423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.312 [2024-11-20 13:40:04.063432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 
13:40:04.063574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.063778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.063798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.063818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.063839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.063867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.063888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.063908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.063928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.063948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.063968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.063988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.063999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.064008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.064019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.064028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.064039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.064048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.064060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.064073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.064084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.313 [2024-11-20 13:40:04.064093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.064104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.313 [2024-11-20 13:40:04.064113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.064123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efff60 is same with the state(6) to be set 00:20:52.313 [2024-11-20 13:40:04.064134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:52.313 [2024-11-20 13:40:04.064142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:52.313 [2024-11-20 13:40:04.064150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63696 len:8 PRP1 0x0 PRP2 0x0 00:20:52.313 [2024-11-20 13:40:04.064159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.064305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.313 [2024-11-20 13:40:04.064333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.064351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.313 [2024-11-20 13:40:04.064361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.064371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.313 [2024-11-20 13:40:04.064380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.064389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.313 [2024-11-20 13:40:04.064398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.313 [2024-11-20 13:40:04.064407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e92e50 is same with the state(6) to be set 00:20:52.314 [2024-11-20 13:40:04.064623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:52.314 [2024-11-20 13:40:04.064652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e92e50 (9): Bad file descriptor 00:20:52.314 [2024-11-20 13:40:04.064751] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.314 [2024-11-20 13:40:04.064773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e92e50 with addr=10.0.0.3, port=4420 00:20:52.314 [2024-11-20 13:40:04.064784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e92e50 is same with the state(6) to be set 00:20:52.314 [2024-11-20 13:40:04.064802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e92e50 (9): Bad file descriptor 00:20:52.314 [2024-11-20 13:40:04.064818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:52.314 [2024-11-20 13:40:04.064828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:52.314 [2024-11-20 13:40:04.064839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:52.314 [2024-11-20 13:40:04.064850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:20:52.314 [2024-11-20 13:40:04.064862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:52.314 13:40:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:20:53.249 3917.50 IOPS, 15.30 MiB/s [2024-11-20T13:40:05.207Z] [2024-11-20 13:40:05.065030] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:53.250 [2024-11-20 13:40:05.065123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e92e50 with addr=10.0.0.3, port=4420 00:20:53.250 [2024-11-20 13:40:05.065141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e92e50 is same with the state(6) to be set 00:20:53.250 [2024-11-20 13:40:05.065169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e92e50 (9): Bad file descriptor 00:20:53.250 [2024-11-20 13:40:05.065202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:53.250 [2024-11-20 13:40:05.065215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:53.250 [2024-11-20 13:40:05.065226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:53.250 [2024-11-20 13:40:05.065238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:20:53.250 [2024-11-20 13:40:05.065250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:53.250 13:40:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:53.508 [2024-11-20 13:40:05.394534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:53.508 13:40:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82720 00:20:54.332 2611.67 IOPS, 10.20 MiB/s [2024-11-20T13:40:06.289Z] [2024-11-20 13:40:06.079664] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
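The sequence above is the first pass of the timeout test: while the listener on 10.0.0.3:4420 is gone, every reconnect attempt fails with "connect() failed, errno = 111", and once host/timeout.sh@91 re-adds the listener the next controller reset succeeds. A minimal sketch of that remove/re-add cycle, using only the rpc.py subcommands, address and port visible in this log (the repo path is the test VM's and is an assumption for any other setup):

# Sketch of the listener remove/re-add cycle driven by host/timeout.sh
# (subcommands, address and port copied from the log above).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Drop the listener: the initiator's reconnect attempts now fail with
# "connect() failed, errno = 111" as seen above.
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420

sleep 1    # let a few reconnect attempts fail

# Re-add the listener: the next reconnect succeeds and bdev_nvme
# reports "Resetting controller successful".
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420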
00:20:56.226 1958.75 IOPS, 7.65 MiB/s [2024-11-20T13:40:09.118Z] 3064.00 IOPS, 11.97 MiB/s [2024-11-20T13:40:10.053Z] 4052.83 IOPS, 15.83 MiB/s [2024-11-20T13:40:10.988Z] 4750.86 IOPS, 18.56 MiB/s [2024-11-20T13:40:11.921Z] 5290.00 IOPS, 20.66 MiB/s [2024-11-20T13:40:13.296Z] 5683.56 IOPS, 22.20 MiB/s [2024-11-20T13:40:13.296Z] 6008.30 IOPS, 23.47 MiB/s 00:21:01.339 Latency(us) 00:21:01.339 [2024-11-20T13:40:13.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.339 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:01.339 Verification LBA range: start 0x0 length 0x4000 00:21:01.339 NVMe0n1 : 10.01 6014.01 23.49 0.00 0.00 21239.27 1593.72 3019898.88 00:21:01.339 [2024-11-20T13:40:13.296Z] =================================================================================================================== 00:21:01.339 [2024-11-20T13:40:13.296Z] Total : 6014.01 23.49 0.00 0.00 21239.27 1593.72 3019898.88 00:21:01.339 { 00:21:01.339 "results": [ 00:21:01.339 { 00:21:01.339 "job": "NVMe0n1", 00:21:01.339 "core_mask": "0x4", 00:21:01.339 "workload": "verify", 00:21:01.339 "status": "finished", 00:21:01.339 "verify_range": { 00:21:01.339 "start": 0, 00:21:01.339 "length": 16384 00:21:01.339 }, 00:21:01.339 "queue_depth": 128, 00:21:01.339 "io_size": 4096, 00:21:01.339 "runtime": 10.008796, 00:21:01.339 "iops": 6014.010076736503, 00:21:01.339 "mibps": 23.492226862251965, 00:21:01.339 "io_failed": 0, 00:21:01.339 "io_timeout": 0, 00:21:01.339 "avg_latency_us": 21239.270134884304, 00:21:01.339 "min_latency_us": 1593.7163636363637, 00:21:01.339 "max_latency_us": 3019898.88 00:21:01.339 } 00:21:01.339 ], 00:21:01.339 "core_count": 1 00:21:01.339 } 00:21:01.339 13:40:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82825 00:21:01.339 13:40:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:01.339 13:40:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:21:01.339 Running I/O for 10 seconds... 
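The Latency(us) table and the JSON block above are bdevperf.py's perform_tests output for the first 10-second run: roughly 6014 IOPS at an average latency of about 21.2 ms, with no failed I/O. A small sketch for pulling those headline fields out of a saved copy of that JSON (jq is assumed to be available; it is not part of the environment shown in this log):

# Print job name, IOPS, throughput, average latency and failed I/O
# from a saved perform_tests result; field names match the JSON above.
jq -r '.results[] |
  "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us, failed \(.io_failed)"' results.json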
00:21:02.275 13:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:02.536 6677.00 IOPS, 26.08 MiB/s [2024-11-20T13:40:14.493Z] [2024-11-20 13:40:14.270884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.536 [2024-11-20 13:40:14.270951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.536 [2024-11-20 13:40:14.270976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.536 [2024-11-20 13:40:14.270988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.536 [2024-11-20 13:40:14.271009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.537 [2024-11-20 13:40:14.271018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.537 [2024-11-20 13:40:14.271039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.537 [2024-11-20 13:40:14.271060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.537 [2024-11-20 13:40:14.271081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.537 [2024-11-20 13:40:14.271102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63672 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:02.537 [2024-11-20 13:40:14.271409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.537 [2024-11-20 13:40:14.271470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.537 [2024-11-20 13:40:14.271489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271609] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.537 [2024-11-20 13:40:14.271649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.537 [2024-11-20 13:40:14.271805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.537 [2024-11-20 13:40:14.271814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.271825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.271835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.271846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.271855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.271866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.271875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.271887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.271896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.271906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.271915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.271926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.271936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.271946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.271955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.271966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.271976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.271987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.271996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 
[2024-11-20 13:40:14.272449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.538 [2024-11-20 13:40:14.272570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.538 [2024-11-20 13:40:14.272579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.272981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.272992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64376 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:02.539 [2024-11-20 13:40:14.273314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.539 [2024-11-20 13:40:14.273365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.539 [2024-11-20 13:40:14.273374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.540 [2024-11-20 13:40:14.273399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.540 [2024-11-20 13:40:14.273430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.540 [2024-11-20 13:40:14.273450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.540 [2024-11-20 13:40:14.273470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.540 [2024-11-20 13:40:14.273491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.540 [2024-11-20 13:40:14.273511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.540 [2024-11-20 13:40:14.273531] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.540 [2024-11-20 13:40:14.273551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.540 [2024-11-20 13:40:14.273572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.540 [2024-11-20 13:40:14.273592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.540 [2024-11-20 13:40:14.273618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.540 [2024-11-20 13:40:14.273639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.540 [2024-11-20 13:40:14.273659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f010e0 is same with the state(6) to be set 00:21:02.540 [2024-11-20 13:40:14.273681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.540 [2024-11-20 13:40:14.273689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.540 [2024-11-20 13:40:14.273697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64592 len:8 PRP1 0x0 PRP2 0x0 00:21:02.540 [2024-11-20 13:40:14.273706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.540 [2024-11-20 13:40:14.273991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:02.540 [2024-11-20 13:40:14.274074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e92e50 (9): Bad file descriptor 00:21:02.540 [2024-11-20 13:40:14.274181] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.540 [2024-11-20 13:40:14.274217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e92e50 with 
addr=10.0.0.3, port=4420 00:21:02.540 [2024-11-20 13:40:14.274236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e92e50 is same with the state(6) to be set 00:21:02.540 [2024-11-20 13:40:14.274254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e92e50 (9): Bad file descriptor 00:21:02.540 [2024-11-20 13:40:14.274271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:02.540 [2024-11-20 13:40:14.274280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:02.540 [2024-11-20 13:40:14.274291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:02.540 [2024-11-20 13:40:14.274302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:21:02.540 [2024-11-20 13:40:14.274313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:02.540 13:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:21:03.477 3978.50 IOPS, 15.54 MiB/s [2024-11-20T13:40:15.434Z] [2024-11-20 13:40:15.274534] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.477 [2024-11-20 13:40:15.274622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e92e50 with addr=10.0.0.3, port=4420 00:21:03.477 [2024-11-20 13:40:15.274640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e92e50 is same with the state(6) to be set 00:21:03.477 [2024-11-20 13:40:15.274668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e92e50 (9): Bad file descriptor 00:21:03.477 [2024-11-20 13:40:15.274689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:03.477 [2024-11-20 13:40:15.274700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:03.477 [2024-11-20 13:40:15.274710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:03.477 [2024-11-20 13:40:15.274723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:21:03.477 [2024-11-20 13:40:15.274735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:04.472 2652.33 IOPS, 10.36 MiB/s [2024-11-20T13:40:16.429Z] [2024-11-20 13:40:16.274885] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.472 [2024-11-20 13:40:16.274964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e92e50 with addr=10.0.0.3, port=4420 00:21:04.472 [2024-11-20 13:40:16.274982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e92e50 is same with the state(6) to be set 00:21:04.472 [2024-11-20 13:40:16.275009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e92e50 (9): Bad file descriptor 00:21:04.472 [2024-11-20 13:40:16.275029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:04.472 [2024-11-20 13:40:16.275040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:04.472 [2024-11-20 13:40:16.275051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:04.472 [2024-11-20 13:40:16.275062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:21:04.472 [2024-11-20 13:40:16.275075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:05.410 1989.25 IOPS, 7.77 MiB/s [2024-11-20T13:40:17.367Z] [2024-11-20 13:40:17.278617] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.410 [2024-11-20 13:40:17.278708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e92e50 with addr=10.0.0.3, port=4420 00:21:05.410 [2024-11-20 13:40:17.278726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e92e50 is same with the state(6) to be set 00:21:05.410 [2024-11-20 13:40:17.278966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e92e50 (9): Bad file descriptor 00:21:05.410 [2024-11-20 13:40:17.279197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:05.410 [2024-11-20 13:40:17.279245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:05.410 [2024-11-20 13:40:17.279256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:05.410 [2024-11-20 13:40:17.279269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:21:05.410 [2024-11-20 13:40:17.279280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:05.410 13:40:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:05.669 [2024-11-20 13:40:17.580142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:05.669 13:40:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82825 00:21:06.495 1591.40 IOPS, 6.22 MiB/s [2024-11-20T13:40:18.453Z] [2024-11-20 13:40:18.309723] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:21:08.366 2472.67 IOPS, 9.66 MiB/s [2024-11-20T13:40:21.259Z] 3430.29 IOPS, 13.40 MiB/s [2024-11-20T13:40:22.194Z] 4163.50 IOPS, 16.26 MiB/s [2024-11-20T13:40:23.130Z] 4721.33 IOPS, 18.44 MiB/s [2024-11-20T13:40:23.130Z] 5153.20 IOPS, 20.13 MiB/s 00:21:11.173 Latency(us) 00:21:11.173 [2024-11-20T13:40:23.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.173 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:11.173 Verification LBA range: start 0x0 length 0x4000 00:21:11.173 NVMe0n1 : 10.01 5160.07 20.16 3680.94 0.00 14447.37 700.04 3019898.88 00:21:11.173 [2024-11-20T13:40:23.130Z] =================================================================================================================== 00:21:11.173 [2024-11-20T13:40:23.130Z] Total : 5160.07 20.16 3680.94 0.00 14447.37 0.00 3019898.88 00:21:11.173 { 00:21:11.173 "results": [ 00:21:11.173 { 00:21:11.173 "job": "NVMe0n1", 00:21:11.173 "core_mask": "0x4", 00:21:11.173 "workload": "verify", 00:21:11.173 "status": "finished", 00:21:11.173 "verify_range": { 00:21:11.173 "start": 0, 00:21:11.173 "length": 16384 00:21:11.173 }, 00:21:11.173 "queue_depth": 128, 00:21:11.173 "io_size": 4096, 00:21:11.173 "runtime": 10.00994, 00:21:11.173 "iops": 5160.070889535801, 00:21:11.173 "mibps": 20.156526912249223, 00:21:11.173 "io_failed": 36846, 00:21:11.173 "io_timeout": 0, 00:21:11.173 "avg_latency_us": 14447.365309457431, 00:21:11.173 "min_latency_us": 700.0436363636363, 00:21:11.173 "max_latency_us": 3019898.88 00:21:11.173 } 00:21:11.173 ], 00:21:11.173 "core_count": 1 00:21:11.173 } 00:21:11.173 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82702 00:21:11.173 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82702 ']' 00:21:11.173 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82702 00:21:11.173 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:11.173 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.173 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82702 00:21:11.432 killing process with pid 82702 00:21:11.432 Received shutdown signal, test time was about 10.000000 seconds 00:21:11.432 00:21:11.432 Latency(us) 00:21:11.432 [2024-11-20T13:40:23.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.432 [2024-11-20T13:40:23.389Z] =================================================================================================================== 00:21:11.432 [2024-11-20T13:40:23.389Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82702' 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82702 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82702 00:21:11.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82940 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82940 /var/tmp/bdevperf.sock 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82940 ']' 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.432 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:11.691 [2024-11-20 13:40:23.411863] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:21:11.691 [2024-11-20 13:40:23.412239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82940 ] 00:21:11.691 [2024-11-20 13:40:23.560901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.691 [2024-11-20 13:40:23.619246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.949 [2024-11-20 13:40:23.676460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:11.950 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.950 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:11.950 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82948 00:21:11.950 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:21:11.950 13:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82940 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:21:12.208 13:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:12.467 NVMe0n1 00:21:12.725 13:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82994 00:21:12.725 13:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:21:12.725 13:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:12.725 Running I/O for 10 seconds... 
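For reference, the bdevperf setup that the timeout test drives in the xtrace lines above can be recapped as the following shell sketch. Every binary path, address, and flag is taken directly from the log; the backgrounding with "&" and the $bdevperf_pid variable (pid 82940 in this run) are illustrative assumptions, since the real host/timeout.sh script orchestrates this through its own helper functions rather than a bare shell snippet.

# Start bdevperf on its own RPC socket in wait-for-RPC mode (-z): core mask 0x4, QD 128, 4 KiB random reads, 10 s run
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
bdevperf_pid=$!   # assumed; the log shows bdevperf_pid=82940

# Apply the bdev_nvme options used by the test (-r -1 -e 9, as logged) and attach the bpftrace probe script to the bdevperf process
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$bdevperf_pid" /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &

# Create the NVMe0 bdev over TCP with a 5 s controller-loss timeout and 2 s reconnect delay, then kick off the I/O job
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &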
00:21:13.660 13:40:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:13.922 14351.00 IOPS, 56.06 MiB/s [2024-11-20T13:40:25.879Z] [2024-11-20 13:40:25.728838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set
[... the preceding tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* entry repeats verbatim, with only the microsecond timestamp advancing (2024-11-20 13:40:25.728892 through 13:40:25.729865); the identical duplicate entries are omitted here ...]
00:21:13.923 [2024-11-20 13:40:25.729889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.729897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.729905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.729913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.729921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.729929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.729938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.729946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.729954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.729961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.729970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.729978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.729986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.729994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.730002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.730010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.730017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x505ac0 is same with the state(6) to be set 00:21:13.923 [2024-11-20 13:40:25.730081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.923 [2024-11-20 13:40:25.730134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.923 [2024-11-20 13:40:25.730158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.923 [2024-11-20 13:40:25.730181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.923 [2024-11-20 13:40:25.730217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:50632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.923 [2024-11-20 13:40:25.730238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.923 [2024-11-20 13:40:25.730258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.923 [2024-11-20 13:40:25.730279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.923 [2024-11-20 13:40:25.730299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:33304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.923 [2024-11-20 13:40:25.730319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.923 [2024-11-20 13:40:25.730340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.923 [2024-11-20 13:40:25.730360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.923 [2024-11-20 13:40:25.730381] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.923 [2024-11-20 13:40:25.730401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.923 [2024-11-20 13:40:25.730410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 
nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77240 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.730989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.730998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.731009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.731019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.731029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 
[2024-11-20 13:40:25.731039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.731049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.731059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.731076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.731085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.731096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.731105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.731117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.731126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.731137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.731147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.731157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.731171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.731183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.731201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.731213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.731222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.731238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.731248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.731259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.731268] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.924 [2024-11-20 13:40:25.731279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.924 [2024-11-20 13:40:25.731289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:55864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731704] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.731980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.731991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.732000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.732011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.732021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.732032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:54080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.732041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.732052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.732061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.732072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.732082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.732103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.732112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.925 [2024-11-20 13:40:25.732124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.925 [2024-11-20 13:40:25.732133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:13.926 [2024-11-20 13:40:25.732144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 
13:40:25.732369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.926 [2024-11-20 13:40:25.732826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.926 [2024-11-20 13:40:25.732835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.927 [2024-11-20 13:40:25.732846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.927 [2024-11-20 13:40:25.732855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.927 [2024-11-20 13:40:25.732867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.927 [2024-11-20 13:40:25.732876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.927 [2024-11-20 13:40:25.732886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143ee20 is same with the state(6) to be set 00:21:13.927 [2024-11-20 13:40:25.732898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:13.927 [2024-11-20 13:40:25.732906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:13.927 [2024-11-20 13:40:25.732919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64800 len:8 PRP1 0x0 PRP2 0x0 00:21:13.927 [2024-11-20 13:40:25.732937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.927 [2024-11-20 13:40:25.733265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:13.927 [2024-11-20 13:40:25.733352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d1e50 (9): Bad file descriptor 00:21:13.927 [2024-11-20 13:40:25.733480] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:13.927 [2024-11-20 13:40:25.733502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d1e50 with addr=10.0.0.3, port=4420 00:21:13.927 [2024-11-20 13:40:25.733513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1e50 is same with the state(6) to be set 00:21:13.927 [2024-11-20 13:40:25.733531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d1e50 (9): Bad file descriptor 00:21:13.927 [2024-11-20 13:40:25.733547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:13.927 [2024-11-20 13:40:25.733557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:13.927 [2024-11-20 13:40:25.733567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:21:13.927 [2024-11-20 13:40:25.733578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:13.927 [2024-11-20 13:40:25.733597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:13.927 13:40:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82994 00:21:15.798 8319.50 IOPS, 32.50 MiB/s [2024-11-20T13:40:27.755Z] 5546.33 IOPS, 21.67 MiB/s [2024-11-20T13:40:27.755Z] [2024-11-20 13:40:27.733900] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:15.798 [2024-11-20 13:40:27.734297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d1e50 with addr=10.0.0.3, port=4420 00:21:15.798 [2024-11-20 13:40:27.734325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1e50 is same with the state(6) to be set 00:21:15.798 [2024-11-20 13:40:27.734360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d1e50 (9): Bad file descriptor 00:21:15.798 [2024-11-20 13:40:27.734381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:15.798 [2024-11-20 13:40:27.734391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:15.798 [2024-11-20 13:40:27.734404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:15.798 [2024-11-20 13:40:27.734416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:15.798 [2024-11-20 13:40:27.734427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:17.667 4159.75 IOPS, 16.25 MiB/s [2024-11-20T13:40:29.882Z] 3327.80 IOPS, 13.00 MiB/s [2024-11-20T13:40:29.882Z] [2024-11-20 13:40:29.734653] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:17.925 [2024-11-20 13:40:29.734744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d1e50 with addr=10.0.0.3, port=4420 00:21:17.925 [2024-11-20 13:40:29.734762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1e50 is same with the state(6) to be set 00:21:17.925 [2024-11-20 13:40:29.734789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d1e50 (9): Bad file descriptor 00:21:17.925 [2024-11-20 13:40:29.734808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:17.925 [2024-11-20 13:40:29.734819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:17.925 [2024-11-20 13:40:29.734830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:17.925 [2024-11-20 13:40:29.734841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
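The connect() failures with errno 111 above are the bdev_nvme reconnect path that host/timeout.sh is exercising: each failed reconnect is retried after the configured delay until the controller-loss timeout expires. A minimal sketch of how that policy is normally set, assuming current scripts/rpc.py option spellings (they are not shown in this log):

    # assumed option names; adjust to the rpc.py in use
    scripts/rpc.py bdev_nvme_set_options \
        --reconnect-delay-sec 2 \
        --ctrlr-loss-timeout-sec 10 \
        --fast-io-fail-timeout-sec 5
    # bdev_nvme then retries a lost TCP connection every reconnect-delay-sec
    # seconds, which matches the ~2 s spacing of the reconnect attempts above
    # and of the "reconnect delay bdev controller NVMe0" trace entries below.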
00:21:17.925 [2024-11-20 13:40:29.734852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:19.800 2773.17 IOPS, 10.83 MiB/s [2024-11-20T13:40:31.757Z] 2377.00 IOPS, 9.29 MiB/s [2024-11-20T13:40:31.757Z] [2024-11-20 13:40:31.734920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:19.800 [2024-11-20 13:40:31.734978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:19.800 [2024-11-20 13:40:31.734991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:19.800 [2024-11-20 13:40:31.735002] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:21:19.800 [2024-11-20 13:40:31.735015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:20.994 2079.88 IOPS, 8.12 MiB/s 00:21:20.994 Latency(us) 00:21:20.994 [2024-11-20T13:40:32.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.994 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:21:20.994 NVMe0n1 : 8.17 2035.98 7.95 15.66 0.00 62279.19 8460.10 7015926.69 00:21:20.994 [2024-11-20T13:40:32.951Z] =================================================================================================================== 00:21:20.994 [2024-11-20T13:40:32.951Z] Total : 2035.98 7.95 15.66 0.00 62279.19 8460.10 7015926.69 00:21:20.994 { 00:21:20.994 "results": [ 00:21:20.994 { 00:21:20.994 "job": "NVMe0n1", 00:21:20.994 "core_mask": "0x4", 00:21:20.994 "workload": "randread", 00:21:20.994 "status": "finished", 00:21:20.994 "queue_depth": 128, 00:21:20.994 "io_size": 4096, 00:21:20.994 "runtime": 8.172492, 00:21:20.994 "iops": 2035.97629707071, 00:21:20.994 "mibps": 7.953032410432461, 00:21:20.994 "io_failed": 128, 00:21:20.994 "io_timeout": 0, 00:21:20.994 "avg_latency_us": 62279.187312740934, 00:21:20.994 "min_latency_us": 8460.101818181818, 00:21:20.994 "max_latency_us": 7015926.69090909 00:21:20.994 } 00:21:20.994 ], 00:21:20.994 "core_count": 1 00:21:20.994 } 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:20.994 Attaching 5 probes... 
00:21:20.994 1418.843713: reset bdev controller NVMe0 00:21:20.994 1418.983666: reconnect bdev controller NVMe0 00:21:20.994 3419.316469: reconnect delay bdev controller NVMe0 00:21:20.994 3419.344511: reconnect bdev controller NVMe0 00:21:20.994 5420.060798: reconnect delay bdev controller NVMe0 00:21:20.994 5420.103817: reconnect bdev controller NVMe0 00:21:20.994 7420.482008: reconnect delay bdev controller NVMe0 00:21:20.994 7420.504060: reconnect bdev controller NVMe0 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82948 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82940 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82940 ']' 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82940 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82940 00:21:20.994 killing process with pid 82940 00:21:20.994 Received shutdown signal, test time was about 8.242986 seconds 00:21:20.994 00:21:20.994 Latency(us) 00:21:20.994 [2024-11-20T13:40:32.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.994 [2024-11-20T13:40:32.951Z] =================================================================================================================== 00:21:20.994 [2024-11-20T13:40:32.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82940' 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82940 00:21:20.994 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82940 00:21:21.254 13:40:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.513 13:40:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:21.513 rmmod nvme_tcp 00:21:21.513 rmmod nvme_fabrics 00:21:21.513 rmmod nvme_keyring 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82507 ']' 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82507 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82507 ']' 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82507 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82507 00:21:21.513 killing process with pid 82507 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82507' 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82507 00:21:21.513 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82507 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:21.772 13:40:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:21.772 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:21:22.031 00:21:22.031 real 0m47.090s 00:21:22.031 user 2m18.306s 00:21:22.031 sys 0m5.763s 00:21:22.031 ************************************ 00:21:22.031 END TEST nvmf_timeout 00:21:22.031 ************************************ 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:22.031 00:21:22.031 real 5m15.769s 00:21:22.031 user 13m45.980s 00:21:22.031 sys 1m10.038s 00:21:22.031 ************************************ 00:21:22.031 END TEST nvmf_host 00:21:22.031 ************************************ 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.031 13:40:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.031 13:40:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:21:22.031 13:40:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:21:22.031 ************************************ 00:21:22.031 END TEST nvmf_tcp 00:21:22.031 ************************************ 00:21:22.031 00:21:22.031 real 13m22.937s 00:21:22.031 user 32m22.548s 00:21:22.031 sys 3m14.703s 00:21:22.031 13:40:33 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.031 13:40:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:22.290 13:40:34 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:21:22.290 13:40:34 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:22.290 13:40:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:22.290 13:40:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.290 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:21:22.290 ************************************ 00:21:22.290 START TEST nvmf_dif 00:21:22.290 ************************************ 00:21:22.290 13:40:34 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:22.290 * Looking for test storage... 
00:21:22.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:22.290 13:40:34 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:22.290 13:40:34 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:21:22.290 13:40:34 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:22.290 13:40:34 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:21:22.290 13:40:34 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.290 13:40:34 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:22.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.290 --rc genhtml_branch_coverage=1 00:21:22.290 --rc genhtml_function_coverage=1 00:21:22.290 --rc genhtml_legend=1 00:21:22.290 --rc geninfo_all_blocks=1 00:21:22.290 --rc geninfo_unexecuted_blocks=1 00:21:22.290 00:21:22.290 ' 00:21:22.290 13:40:34 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:22.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.290 --rc genhtml_branch_coverage=1 00:21:22.290 --rc genhtml_function_coverage=1 00:21:22.290 --rc genhtml_legend=1 00:21:22.290 --rc geninfo_all_blocks=1 00:21:22.290 --rc geninfo_unexecuted_blocks=1 00:21:22.290 00:21:22.290 ' 00:21:22.290 13:40:34 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:21:22.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.290 --rc genhtml_branch_coverage=1 00:21:22.290 --rc genhtml_function_coverage=1 00:21:22.290 --rc genhtml_legend=1 00:21:22.290 --rc geninfo_all_blocks=1 00:21:22.290 --rc geninfo_unexecuted_blocks=1 00:21:22.290 00:21:22.290 ' 00:21:22.290 13:40:34 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:22.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.290 --rc genhtml_branch_coverage=1 00:21:22.290 --rc genhtml_function_coverage=1 00:21:22.290 --rc genhtml_legend=1 00:21:22.290 --rc geninfo_all_blocks=1 00:21:22.290 --rc geninfo_unexecuted_blocks=1 00:21:22.290 00:21:22.290 ' 00:21:22.290 13:40:34 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.290 13:40:34 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.290 13:40:34 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.290 13:40:34 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.290 13:40:34 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.290 13:40:34 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:21:22.290 13:40:34 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.290 13:40:34 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.291 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.291 13:40:34 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:21:22.291 13:40:34 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:22.291 13:40:34 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:22.291 13:40:34 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:21:22.291 13:40:34 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.291 13:40:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:22.291 13:40:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:22.291 13:40:34 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:22.291 13:40:34 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:22.549 Cannot find device "nvmf_init_br" 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@162 -- # true 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:22.549 Cannot find device "nvmf_init_br2" 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@163 -- # true 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:22.549 Cannot find device "nvmf_tgt_br" 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@164 -- # true 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:22.549 Cannot find device "nvmf_tgt_br2" 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@165 -- # true 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:22.549 Cannot find device "nvmf_init_br" 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@166 -- # true 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:22.549 Cannot find device "nvmf_init_br2" 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@167 -- # true 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:22.549 Cannot find device "nvmf_tgt_br" 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@168 -- # true 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:22.549 Cannot find device "nvmf_tgt_br2" 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@169 -- # true 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:22.549 Cannot find device "nvmf_br" 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@170 -- # true 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:21:22.549 Cannot find device "nvmf_init_if" 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@171 -- # true 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:22.549 Cannot find device "nvmf_init_if2" 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@172 -- # true 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:22.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@173 -- # true 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:22.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@174 -- # true 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:22.549 13:40:34 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:22.809 13:40:34 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:22.809 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:22.809 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:21:22.809 00:21:22.809 --- 10.0.0.3 ping statistics --- 00:21:22.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.809 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:22.809 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:22.809 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:21:22.809 00:21:22.809 --- 10.0.0.4 ping statistics --- 00:21:22.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.809 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:22.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:22.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:22.809 00:21:22.809 --- 10.0.0.1 ping statistics --- 00:21:22.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.809 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:22.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:22.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:21:22.809 00:21:22.809 --- 10.0.0.2 ping statistics --- 00:21:22.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.809 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:22.809 13:40:34 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:23.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:23.068 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:23.068 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:23.343 13:40:35 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.343 13:40:35 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:23.343 13:40:35 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:23.343 13:40:35 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.343 13:40:35 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:23.343 13:40:35 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:23.343 13:40:35 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:23.343 13:40:35 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:21:23.343 13:40:35 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:23.343 13:40:35 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.343 13:40:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:23.343 13:40:35 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:23.343 13:40:35 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83476 00:21:23.343 13:40:35 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83476 00:21:23.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.343 13:40:35 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83476 ']' 00:21:23.343 13:40:35 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.343 13:40:35 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.343 13:40:35 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.343 13:40:35 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.343 13:40:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:23.343 [2024-11-20 13:40:35.133413] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:21:23.343 [2024-11-20 13:40:35.133778] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.343 [2024-11-20 13:40:35.284736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.602 [2024-11-20 13:40:35.345408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
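The veth/namespace plumbing that nvmf_veth_init performed above, and that the four pings verify, condenses to the sketch below; it only restates commands already visible in this log (interface names and 10.0.0.x addresses are the test's own), omitting the symmetric nvmf_init_if2/nvmf_tgt_if2 pair and the "ip link set ... up" calls:

    # target network namespace plus veth pairs for initiator and target
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bridge the two sides together
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT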
00:21:23.602 [2024-11-20 13:40:35.345476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.602 [2024-11-20 13:40:35.345489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.602 [2024-11-20 13:40:35.345497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.602 [2024-11-20 13:40:35.345505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.602 [2024-11-20 13:40:35.345911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.602 [2024-11-20 13:40:35.400291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:23.602 13:40:35 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.602 13:40:35 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:21:23.602 13:40:35 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:23.602 13:40:35 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:23.602 13:40:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:23.602 13:40:35 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.602 13:40:35 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:21:23.602 13:40:35 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:23.602 13:40:35 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.602 13:40:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:23.602 [2024-11-20 13:40:35.516044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.602 13:40:35 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.602 13:40:35 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:23.602 13:40:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:23.602 13:40:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.602 13:40:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:23.602 ************************************ 00:21:23.602 START TEST fio_dif_1_default 00:21:23.602 ************************************ 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:23.602 bdev_null0 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:23.602 
13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.602 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:23.862 [2024-11-20 13:40:35.564213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.862 { 00:21:23.862 "params": { 00:21:23.862 "name": "Nvme$subsystem", 00:21:23.862 "trtype": "$TEST_TRANSPORT", 00:21:23.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.862 "adrfam": "ipv4", 00:21:23.862 "trsvcid": "$NVMF_PORT", 00:21:23.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.862 "hdgst": ${hdgst:-false}, 00:21:23.862 "ddgst": ${ddgst:-false} 00:21:23.862 }, 00:21:23.862 "method": "bdev_nvme_attach_controller" 00:21:23.862 } 00:21:23.862 EOF 00:21:23.862 )") 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:23.862 "params": { 00:21:23.862 "name": "Nvme0", 00:21:23.862 "trtype": "tcp", 00:21:23.862 "traddr": "10.0.0.3", 00:21:23.862 "adrfam": "ipv4", 00:21:23.862 "trsvcid": "4420", 00:21:23.862 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:23.862 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:23.862 "hdgst": false, 00:21:23.862 "ddgst": false 00:21:23.862 }, 00:21:23.862 "method": "bdev_nvme_attach_controller" 00:21:23.862 }' 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:23.862 13:40:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:23.862 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:23.862 fio-3.35 00:21:23.862 Starting 1 thread 00:21:36.066 00:21:36.066 filename0: (groupid=0, jobs=1): err= 0: pid=83535: Wed Nov 20 13:40:46 2024 00:21:36.066 read: IOPS=8291, BW=32.4MiB/s (34.0MB/s)(324MiB/10001msec) 00:21:36.066 slat (usec): min=6, max=893, avg= 8.86, stdev= 3.98 00:21:36.066 clat (usec): min=358, max=1802, avg=456.41, stdev=39.28 00:21:36.066 lat (usec): min=365, max=1813, avg=465.27, stdev=39.98 00:21:36.066 clat percentiles (usec): 00:21:36.066 | 1.00th=[ 400], 5.00th=[ 416], 
10.00th=[ 424], 20.00th=[ 433], 00:21:36.066 | 30.00th=[ 437], 40.00th=[ 441], 50.00th=[ 449], 60.00th=[ 457], 00:21:36.066 | 70.00th=[ 461], 80.00th=[ 478], 90.00th=[ 502], 95.00th=[ 523], 00:21:36.066 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 709], 99.95th=[ 766], 00:21:36.066 | 99.99th=[ 1500] 00:21:36.066 bw ( KiB/s): min=31168, max=33952, per=100.00%, avg=33333.89, stdev=832.48, samples=19 00:21:36.066 iops : min= 7792, max= 8488, avg=8333.47, stdev=208.12, samples=19 00:21:36.066 lat (usec) : 500=89.98%, 750=9.97%, 1000=0.02% 00:21:36.066 lat (msec) : 2=0.03% 00:21:36.066 cpu : usr=84.58%, sys=13.52%, ctx=33, majf=0, minf=9 00:21:36.066 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.066 issued rwts: total=82928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.066 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:36.066 00:21:36.066 Run status group 0 (all jobs): 00:21:36.066 READ: bw=32.4MiB/s (34.0MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=324MiB (340MB), run=10001-10001msec 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:36.066 ************************************ 00:21:36.066 END TEST fio_dif_1_default 00:21:36.066 ************************************ 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.066 00:21:36.066 real 0m11.107s 00:21:36.066 user 0m9.182s 00:21:36.066 sys 0m1.639s 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:36.066 13:40:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:21:36.066 13:40:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:36.066 13:40:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:36.066 13:40:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:36.066 ************************************ 00:21:36.066 START TEST fio_dif_1_multi_subsystems 00:21:36.066 ************************************ 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # 
fio_dif_1_multi_subsystems 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:36.066 bdev_null0 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.066 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:36.067 [2024-11-20 13:40:46.725861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:36.067 bdev_null1 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.067 13:40:46 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.067 { 00:21:36.067 "params": { 00:21:36.067 "name": "Nvme$subsystem", 00:21:36.067 "trtype": "$TEST_TRANSPORT", 00:21:36.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.067 "adrfam": "ipv4", 00:21:36.067 "trsvcid": "$NVMF_PORT", 00:21:36.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.067 "hdgst": ${hdgst:-false}, 00:21:36.067 "ddgst": ${ddgst:-false} 00:21:36.067 }, 00:21:36.067 "method": "bdev_nvme_attach_controller" 00:21:36.067 } 00:21:36.067 EOF 00:21:36.067 )") 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:21:36.067 13:40:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.067 { 00:21:36.067 "params": { 00:21:36.067 "name": "Nvme$subsystem", 00:21:36.067 "trtype": "$TEST_TRANSPORT", 00:21:36.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.067 "adrfam": "ipv4", 00:21:36.067 "trsvcid": "$NVMF_PORT", 00:21:36.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.067 "hdgst": ${hdgst:-false}, 00:21:36.067 "ddgst": ${ddgst:-false} 00:21:36.067 }, 00:21:36.067 "method": "bdev_nvme_attach_controller" 00:21:36.067 } 00:21:36.067 EOF 00:21:36.067 )") 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
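The heredoc fragments traced above are gen_nvmf_target_json assembling one bdev_nvme_attach_controller entry per subsystem; jq then merges them into the JSON printed next, which fio_bdev hands to the fio spdk_bdev plugin on /dev/fd/62 while the generated job file arrives on /dev/fd/61. A standalone equivalent of that invocation is sketched below; bdev.json and job.fio are hypothetical stand-ins for the two file descriptors, everything else is taken from the trace.

  # run fio against SPDK bdevs instead of kernel block devices
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

Passing the JSON and the job file on anonymous file descriptors keeps the test from writing temporary files into the workspace.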
00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:36.067 "params": { 00:21:36.067 "name": "Nvme0", 00:21:36.067 "trtype": "tcp", 00:21:36.067 "traddr": "10.0.0.3", 00:21:36.067 "adrfam": "ipv4", 00:21:36.067 "trsvcid": "4420", 00:21:36.067 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:36.067 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:36.067 "hdgst": false, 00:21:36.067 "ddgst": false 00:21:36.067 }, 00:21:36.067 "method": "bdev_nvme_attach_controller" 00:21:36.067 },{ 00:21:36.067 "params": { 00:21:36.067 "name": "Nvme1", 00:21:36.067 "trtype": "tcp", 00:21:36.067 "traddr": "10.0.0.3", 00:21:36.067 "adrfam": "ipv4", 00:21:36.067 "trsvcid": "4420", 00:21:36.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.067 "hdgst": false, 00:21:36.067 "ddgst": false 00:21:36.067 }, 00:21:36.067 "method": "bdev_nvme_attach_controller" 00:21:36.067 }' 00:21:36.067 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:36.068 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:36.068 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.068 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.068 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:36.068 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:36.068 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:36.068 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:36.068 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:36.068 13:40:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.068 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:36.068 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:36.068 fio-3.35 00:21:36.068 Starting 2 threads 00:21:46.037 00:21:46.037 filename0: (groupid=0, jobs=1): err= 0: pid=83695: Wed Nov 20 13:40:57 2024 00:21:46.037 read: IOPS=4496, BW=17.6MiB/s (18.4MB/s)(176MiB/10001msec) 00:21:46.037 slat (nsec): min=6361, max=97112, avg=14727.54, stdev=5612.58 00:21:46.037 clat (usec): min=455, max=3733, avg=849.32, stdev=69.35 00:21:46.037 lat (usec): min=464, max=3759, avg=864.05, stdev=71.24 00:21:46.037 clat percentiles (usec): 00:21:46.037 | 1.00th=[ 775], 5.00th=[ 783], 10.00th=[ 791], 20.00th=[ 807], 00:21:46.037 | 30.00th=[ 816], 40.00th=[ 824], 50.00th=[ 832], 60.00th=[ 848], 00:21:46.037 | 70.00th=[ 865], 80.00th=[ 881], 90.00th=[ 914], 95.00th=[ 947], 00:21:46.037 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1336], 99.95th=[ 1844], 00:21:46.037 | 99.99th=[ 2073] 00:21:46.037 bw ( KiB/s): min=15488, max=18560, per=50.27%, avg=18078.53, stdev=800.19, samples=19 00:21:46.037 iops : min= 3872, 
max= 4640, avg=4519.63, stdev=200.05, samples=19 00:21:46.037 lat (usec) : 500=0.03%, 750=0.08%, 1000=97.63% 00:21:46.037 lat (msec) : 2=2.24%, 4=0.02% 00:21:46.037 cpu : usr=90.85%, sys=7.83%, ctx=86, majf=0, minf=0 00:21:46.037 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:46.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.037 issued rwts: total=44972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.037 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:46.037 filename1: (groupid=0, jobs=1): err= 0: pid=83696: Wed Nov 20 13:40:57 2024 00:21:46.037 read: IOPS=4493, BW=17.6MiB/s (18.4MB/s)(176MiB/10001msec) 00:21:46.037 slat (usec): min=4, max=283, avg=14.98, stdev= 6.32 00:21:46.037 clat (usec): min=604, max=4803, avg=849.38, stdev=80.36 00:21:46.037 lat (usec): min=612, max=4836, avg=864.36, stdev=82.91 00:21:46.037 clat percentiles (usec): 00:21:46.037 | 1.00th=[ 734], 5.00th=[ 750], 10.00th=[ 775], 20.00th=[ 799], 00:21:46.037 | 30.00th=[ 816], 40.00th=[ 832], 50.00th=[ 840], 60.00th=[ 857], 00:21:46.037 | 70.00th=[ 865], 80.00th=[ 889], 90.00th=[ 922], 95.00th=[ 963], 00:21:46.037 | 99.00th=[ 1090], 99.50th=[ 1139], 99.90th=[ 1434], 99.95th=[ 1860], 00:21:46.037 | 99.99th=[ 2114] 00:21:46.037 bw ( KiB/s): min=15488, max=18560, per=50.24%, avg=18066.53, stdev=798.39, samples=19 00:21:46.037 iops : min= 3872, max= 4640, avg=4516.63, stdev=199.60, samples=19 00:21:46.037 lat (usec) : 750=4.21%, 1000=93.30% 00:21:46.037 lat (msec) : 2=2.47%, 4=0.01%, 10=0.01% 00:21:46.037 cpu : usr=90.71%, sys=7.92%, ctx=57, majf=0, minf=0 00:21:46.037 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:46.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.037 issued rwts: total=44944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.037 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:46.037 00:21:46.037 Run status group 0 (all jobs): 00:21:46.037 READ: bw=35.1MiB/s (36.8MB/s), 17.6MiB/s-17.6MiB/s (18.4MB/s-18.4MB/s), io=351MiB (368MB), run=10001-10001msec 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:46.037 ************************************ 00:21:46.037 END TEST fio_dif_1_multi_subsystems 00:21:46.037 ************************************ 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.037 00:21:46.037 real 0m11.255s 00:21:46.037 user 0m18.996s 00:21:46.037 sys 0m1.922s 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.037 13:40:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:46.298 13:40:57 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:46.298 13:40:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:46.298 13:40:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.298 13:40:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:46.298 ************************************ 00:21:46.298 START TEST fio_dif_rand_params 00:21:46.298 ************************************ 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:46.298 13:40:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:46.298 bdev_null0 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:46.298 [2024-11-20 13:40:58.037310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:46.298 13:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:46.298 { 00:21:46.298 "params": { 00:21:46.298 "name": "Nvme$subsystem", 00:21:46.298 "trtype": "$TEST_TRANSPORT", 00:21:46.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.299 "adrfam": "ipv4", 00:21:46.299 "trsvcid": "$NVMF_PORT", 00:21:46.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.299 "hdgst": ${hdgst:-false}, 00:21:46.299 "ddgst": ${ddgst:-false} 00:21:46.299 }, 00:21:46.299 "method": "bdev_nvme_attach_controller" 00:21:46.299 } 00:21:46.299 EOF 00:21:46.299 )") 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:46.299 
13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
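As in the earlier tests, the create_subsystem helper traced above reduces to four RPCs per subsystem; for this fio_dif_rand_params case the null bdev carries 16-byte metadata with DIF type 3. The sketch below restates those calls outside the harness; scripts/rpc.py is a stand-in for the suite's rpc_cmd wrapper and is the only name not taken from the trace.

  # back the subsystem with a 64 MB null bdev, 512-byte blocks, DIF type 3
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.3 -s 4420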
00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:46.299 "params": { 00:21:46.299 "name": "Nvme0", 00:21:46.299 "trtype": "tcp", 00:21:46.299 "traddr": "10.0.0.3", 00:21:46.299 "adrfam": "ipv4", 00:21:46.299 "trsvcid": "4420", 00:21:46.299 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:46.299 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:46.299 "hdgst": false, 00:21:46.299 "ddgst": false 00:21:46.299 }, 00:21:46.299 "method": "bdev_nvme_attach_controller" 00:21:46.299 }' 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:46.299 13:40:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:46.558 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:46.558 ... 
00:21:46.558 fio-3.35 00:21:46.558 Starting 3 threads 00:21:53.178 00:21:53.178 filename0: (groupid=0, jobs=1): err= 0: pid=83852: Wed Nov 20 13:41:03 2024 00:21:53.178 read: IOPS=238, BW=29.8MiB/s (31.2MB/s)(149MiB/5002msec) 00:21:53.178 slat (nsec): min=5460, max=48444, avg=15378.50, stdev=3172.70 00:21:53.178 clat (usec): min=10128, max=15240, avg=12562.33, stdev=547.35 00:21:53.178 lat (usec): min=10151, max=15256, avg=12577.71, stdev=547.89 00:21:53.178 clat percentiles (usec): 00:21:53.178 | 1.00th=[11207], 5.00th=[11600], 10.00th=[11863], 20.00th=[12125], 00:21:53.178 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12518], 60.00th=[12649], 00:21:53.178 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13304], 95.00th=[13304], 00:21:53.178 | 99.00th=[13566], 99.50th=[13960], 99.90th=[15270], 99.95th=[15270], 00:21:53.178 | 99.99th=[15270] 00:21:53.178 bw ( KiB/s): min=28416, max=31488, per=33.24%, avg=30378.67, stdev=1024.00, samples=9 00:21:53.178 iops : min= 222, max= 246, avg=237.33, stdev= 8.00, samples=9 00:21:53.178 lat (msec) : 20=100.00% 00:21:53.178 cpu : usr=90.68%, sys=8.70%, ctx=10, majf=0, minf=0 00:21:53.178 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:53.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.178 issued rwts: total=1191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.178 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:53.178 filename0: (groupid=0, jobs=1): err= 0: pid=83853: Wed Nov 20 13:41:03 2024 00:21:53.178 read: IOPS=238, BW=29.8MiB/s (31.2MB/s)(149MiB/5004msec) 00:21:53.178 slat (nsec): min=5589, max=62034, avg=14567.54, stdev=8301.85 00:21:53.178 clat (usec): min=11154, max=15548, avg=12564.32, stdev=536.45 00:21:53.178 lat (usec): min=11163, max=15575, avg=12578.88, stdev=537.57 00:21:53.178 clat percentiles (usec): 00:21:53.178 | 1.00th=[11338], 5.00th=[11600], 10.00th=[11863], 20.00th=[12125], 00:21:53.178 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12518], 60.00th=[12649], 00:21:53.178 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13304], 95.00th=[13304], 00:21:53.178 | 99.00th=[13566], 99.50th=[13960], 99.90th=[15533], 99.95th=[15533], 00:21:53.178 | 99.99th=[15533] 00:21:53.178 bw ( KiB/s): min=28416, max=31488, per=33.24%, avg=30378.67, stdev=1024.00, samples=9 00:21:53.178 iops : min= 222, max= 246, avg=237.33, stdev= 8.00, samples=9 00:21:53.178 lat (msec) : 20=100.00% 00:21:53.178 cpu : usr=89.33%, sys=9.59%, ctx=40, majf=0, minf=0 00:21:53.178 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:53.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.178 issued rwts: total=1191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.178 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:53.178 filename0: (groupid=0, jobs=1): err= 0: pid=83854: Wed Nov 20 13:41:03 2024 00:21:53.178 read: IOPS=238, BW=29.8MiB/s (31.2MB/s)(149MiB/5002msec) 00:21:53.178 slat (nsec): min=7319, max=36836, avg=15525.74, stdev=3196.97 00:21:53.178 clat (usec): min=10127, max=15285, avg=12561.93, stdev=548.16 00:21:53.178 lat (usec): min=10151, max=15310, avg=12577.45, stdev=548.72 00:21:53.178 clat percentiles (usec): 00:21:53.178 | 1.00th=[11207], 5.00th=[11600], 10.00th=[11863], 20.00th=[12125], 00:21:53.178 | 30.00th=[12387], 40.00th=[12518], 
50.00th=[12518], 60.00th=[12649], 00:21:53.178 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13304], 95.00th=[13304], 00:21:53.178 | 99.00th=[13566], 99.50th=[13960], 99.90th=[15270], 99.95th=[15270], 00:21:53.178 | 99.99th=[15270] 00:21:53.178 bw ( KiB/s): min=28416, max=31488, per=33.24%, avg=30378.67, stdev=1024.00, samples=9 00:21:53.178 iops : min= 222, max= 246, avg=237.33, stdev= 8.00, samples=9 00:21:53.178 lat (msec) : 20=100.00% 00:21:53.178 cpu : usr=91.16%, sys=8.20%, ctx=5, majf=0, minf=0 00:21:53.178 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:53.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.178 issued rwts: total=1191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.178 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:53.178 00:21:53.178 Run status group 0 (all jobs): 00:21:53.178 READ: bw=89.3MiB/s (93.6MB/s), 29.8MiB/s-29.8MiB/s (31.2MB/s-31.2MB/s), io=447MiB (468MB), run=5002-5004msec 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:21:53.178 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:21:53.179 13:41:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.179 bdev_null0 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.179 [2024-11-20 13:41:04.107005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.179 bdev_null1 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.179 bdev_null2 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:53.179 { 00:21:53.179 "params": { 00:21:53.179 "name": "Nvme$subsystem", 00:21:53.179 "trtype": "$TEST_TRANSPORT", 00:21:53.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.179 "adrfam": "ipv4", 00:21:53.179 "trsvcid": "$NVMF_PORT", 00:21:53.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:53.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.179 "hdgst": ${hdgst:-false}, 00:21:53.179 "ddgst": ${ddgst:-false} 00:21:53.179 }, 00:21:53.179 "method": "bdev_nvme_attach_controller" 00:21:53.179 } 00:21:53.179 EOF 00:21:53.179 )") 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:53.179 { 00:21:53.179 "params": { 00:21:53.179 "name": "Nvme$subsystem", 00:21:53.179 "trtype": "$TEST_TRANSPORT", 00:21:53.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.179 "adrfam": "ipv4", 00:21:53.179 "trsvcid": "$NVMF_PORT", 00:21:53.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.179 "hdgst": ${hdgst:-false}, 00:21:53.179 "ddgst": ${ddgst:-false} 00:21:53.179 }, 00:21:53.179 "method": "bdev_nvme_attach_controller" 00:21:53.179 } 00:21:53.179 EOF 00:21:53.179 )") 00:21:53.179 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:53.180 13:41:04 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:53.180 { 00:21:53.180 "params": { 00:21:53.180 "name": "Nvme$subsystem", 00:21:53.180 "trtype": "$TEST_TRANSPORT", 00:21:53.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.180 "adrfam": "ipv4", 00:21:53.180 "trsvcid": "$NVMF_PORT", 00:21:53.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.180 "hdgst": ${hdgst:-false}, 00:21:53.180 "ddgst": ${ddgst:-false} 00:21:53.180 }, 00:21:53.180 "method": "bdev_nvme_attach_controller" 00:21:53.180 } 00:21:53.180 EOF 00:21:53.180 )") 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:53.180 "params": { 00:21:53.180 "name": "Nvme0", 00:21:53.180 "trtype": "tcp", 00:21:53.180 "traddr": "10.0.0.3", 00:21:53.180 "adrfam": "ipv4", 00:21:53.180 "trsvcid": "4420", 00:21:53.180 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:53.180 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:53.180 "hdgst": false, 00:21:53.180 "ddgst": false 00:21:53.180 }, 00:21:53.180 "method": "bdev_nvme_attach_controller" 00:21:53.180 },{ 00:21:53.180 "params": { 00:21:53.180 "name": "Nvme1", 00:21:53.180 "trtype": "tcp", 00:21:53.180 "traddr": "10.0.0.3", 00:21:53.180 "adrfam": "ipv4", 00:21:53.180 "trsvcid": "4420", 00:21:53.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:53.180 "hdgst": false, 00:21:53.180 "ddgst": false 00:21:53.180 }, 00:21:53.180 "method": "bdev_nvme_attach_controller" 00:21:53.180 },{ 00:21:53.180 "params": { 00:21:53.180 "name": "Nvme2", 00:21:53.180 "trtype": "tcp", 00:21:53.180 "traddr": "10.0.0.3", 00:21:53.180 "adrfam": "ipv4", 00:21:53.180 "trsvcid": "4420", 00:21:53.180 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:53.180 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:53.180 "hdgst": false, 00:21:53.180 "ddgst": false 00:21:53.180 }, 00:21:53.180 "method": "bdev_nvme_attach_controller" 00:21:53.180 }' 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:53.180 13:41:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:53.180 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:53.180 ... 00:21:53.180 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:53.180 ... 00:21:53.180 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:53.180 ... 00:21:53.180 fio-3.35 00:21:53.180 Starting 24 threads 00:22:05.410 00:22:05.410 filename0: (groupid=0, jobs=1): err= 0: pid=83949: Wed Nov 20 13:41:15 2024 00:22:05.410 read: IOPS=205, BW=824KiB/s (843kB/s)(8248KiB/10015msec) 00:22:05.410 slat (usec): min=5, max=8023, avg=19.06, stdev=176.44 00:22:05.410 clat (msec): min=14, max=156, avg=77.63, stdev=28.15 00:22:05.410 lat (msec): min=14, max=156, avg=77.65, stdev=28.16 00:22:05.410 clat percentiles (msec): 00:22:05.411 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 51], 00:22:05.411 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:22:05.411 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 131], 95.00th=[ 132], 00:22:05.411 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:22:05.411 | 99.99th=[ 157] 00:22:05.411 bw ( KiB/s): min= 560, max= 1320, per=4.41%, avg=826.53, stdev=216.58, samples=19 00:22:05.411 iops : min= 140, max= 330, avg=206.63, stdev=54.14, samples=19 00:22:05.411 lat (msec) : 20=0.48%, 50=19.64%, 100=63.43%, 250=16.44% 00:22:05.411 cpu : usr=31.44%, sys=1.71%, ctx=841, majf=0, minf=9 00:22:05.411 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:22:05.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 complete : 0=0.0%, 4=86.7%, 8=13.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 issued rwts: total=2062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.411 filename0: (groupid=0, jobs=1): err= 0: pid=83950: Wed Nov 20 13:41:15 2024 00:22:05.411 read: IOPS=200, BW=802KiB/s (821kB/s)(8040KiB/10023msec) 00:22:05.411 slat (usec): min=5, max=8029, avg=25.59, stdev=271.82 00:22:05.411 clat (msec): min=24, max=153, avg=79.60, stdev=27.05 00:22:05.411 lat (msec): min=24, max=153, avg=79.63, stdev=27.04 00:22:05.411 clat percentiles (msec): 00:22:05.411 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 55], 00:22:05.411 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:22:05.411 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 129], 95.00th=[ 132], 00:22:05.411 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 155], 99.95th=[ 155], 00:22:05.411 | 99.99th=[ 155] 00:22:05.411 bw ( KiB/s): min= 560, max= 1240, per=4.27%, avg=800.40, stdev=191.04, samples=20 00:22:05.411 iops : min= 140, max= 310, avg=200.10, stdev=47.76, samples=20 00:22:05.411 lat (msec) : 50=16.32%, 100=65.52%, 250=18.16% 00:22:05.411 cpu : usr=39.64%, sys=2.18%, ctx=1203, majf=0, minf=9 00:22:05.411 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:22:05.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.411 
latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.411 filename0: (groupid=0, jobs=1): err= 0: pid=83951: Wed Nov 20 13:41:15 2024 00:22:05.411 read: IOPS=197, BW=789KiB/s (808kB/s)(7892KiB/10003msec) 00:22:05.411 slat (usec): min=4, max=8025, avg=20.78, stdev=201.75 00:22:05.411 clat (msec): min=2, max=154, avg=81.02, stdev=27.56 00:22:05.411 lat (msec): min=2, max=154, avg=81.04, stdev=27.55 00:22:05.411 clat percentiles (msec): 00:22:05.411 | 1.00th=[ 15], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 57], 00:22:05.411 | 30.00th=[ 71], 40.00th=[ 77], 50.00th=[ 81], 60.00th=[ 84], 00:22:05.411 | 70.00th=[ 88], 80.00th=[ 99], 90.00th=[ 128], 95.00th=[ 134], 00:22:05.411 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 155], 00:22:05.411 | 99.99th=[ 155] 00:22:05.411 bw ( KiB/s): min= 560, max= 1152, per=4.18%, avg=784.74, stdev=175.68, samples=19 00:22:05.411 iops : min= 140, max= 288, avg=196.16, stdev=43.89, samples=19 00:22:05.411 lat (msec) : 4=0.81%, 20=0.30%, 50=11.35%, 100=69.84%, 250=17.69% 00:22:05.411 cpu : usr=50.21%, sys=2.43%, ctx=1378, majf=0, minf=9 00:22:05.411 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.5%, 16=14.9%, 32=0.0%, >=64=0.0% 00:22:05.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 complete : 0=0.0%, 4=88.4%, 8=10.2%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 issued rwts: total=1973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.411 filename0: (groupid=0, jobs=1): err= 0: pid=83952: Wed Nov 20 13:41:15 2024 00:22:05.411 read: IOPS=195, BW=783KiB/s (802kB/s)(7884KiB/10066msec) 00:22:05.411 slat (usec): min=4, max=4033, avg=18.82, stdev=128.06 00:22:05.411 clat (msec): min=16, max=166, avg=81.51, stdev=29.65 00:22:05.411 lat (msec): min=16, max=166, avg=81.53, stdev=29.65 00:22:05.411 clat percentiles (msec): 00:22:05.411 | 1.00th=[ 18], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 59], 00:22:05.411 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 84], 00:22:05.411 | 70.00th=[ 91], 80.00th=[ 109], 90.00th=[ 129], 95.00th=[ 134], 00:22:05.411 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 165], 99.95th=[ 167], 00:22:05.411 | 99.99th=[ 167] 00:22:05.411 bw ( KiB/s): min= 536, max= 1648, per=4.17%, avg=781.70, stdev=255.21, samples=20 00:22:05.411 iops : min= 134, max= 412, avg=195.40, stdev=63.79, samples=20 00:22:05.411 lat (msec) : 20=2.33%, 50=13.14%, 100=62.81%, 250=21.71% 00:22:05.411 cpu : usr=36.47%, sys=1.68%, ctx=1043, majf=0, minf=9 00:22:05.411 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=79.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:22:05.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 issued rwts: total=1971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.411 filename0: (groupid=0, jobs=1): err= 0: pid=83953: Wed Nov 20 13:41:15 2024 00:22:05.411 read: IOPS=202, BW=811KiB/s (831kB/s)(8120KiB/10009msec) 00:22:05.411 slat (usec): min=4, max=8026, avg=18.20, stdev=177.90 00:22:05.411 clat (msec): min=12, max=154, avg=78.79, stdev=27.15 00:22:05.411 lat (msec): min=12, max=154, avg=78.81, stdev=27.16 00:22:05.411 clat percentiles (msec): 00:22:05.411 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:22:05.411 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 83], 00:22:05.411 | 70.00th=[ 86], 80.00th=[ 95], 90.00th=[ 128], 
95.00th=[ 132], 00:22:05.411 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 155], 99.95th=[ 155], 00:22:05.411 | 99.99th=[ 155] 00:22:05.411 bw ( KiB/s): min= 512, max= 1080, per=4.33%, avg=812.63, stdev=190.16, samples=19 00:22:05.411 iops : min= 128, max= 270, avg=203.16, stdev=47.54, samples=19 00:22:05.411 lat (msec) : 20=0.34%, 50=16.90%, 100=65.52%, 250=17.24% 00:22:05.411 cpu : usr=36.24%, sys=1.61%, ctx=1152, majf=0, minf=9 00:22:05.411 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=83.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:22:05.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 issued rwts: total=2030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.411 filename0: (groupid=0, jobs=1): err= 0: pid=83954: Wed Nov 20 13:41:15 2024 00:22:05.411 read: IOPS=180, BW=720KiB/s (738kB/s)(7272KiB/10096msec) 00:22:05.411 slat (usec): min=5, max=4048, avg=20.43, stdev=136.52 00:22:05.411 clat (msec): min=2, max=199, avg=88.45, stdev=36.30 00:22:05.411 lat (msec): min=2, max=199, avg=88.47, stdev=36.29 00:22:05.411 clat percentiles (msec): 00:22:05.411 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 40], 20.00th=[ 73], 00:22:05.411 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 84], 60.00th=[ 89], 00:22:05.411 | 70.00th=[ 102], 80.00th=[ 123], 90.00th=[ 134], 95.00th=[ 144], 00:22:05.411 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 201], 00:22:05.411 | 99.99th=[ 201] 00:22:05.411 bw ( KiB/s): min= 400, max= 2131, per=3.84%, avg=719.80, stdev=361.50, samples=20 00:22:05.411 iops : min= 100, max= 532, avg=179.90, stdev=90.22, samples=20 00:22:05.411 lat (msec) : 4=2.15%, 10=4.35%, 20=0.33%, 50=5.28%, 100=57.15% 00:22:05.411 lat (msec) : 250=30.75% 00:22:05.411 cpu : usr=40.51%, sys=2.09%, ctx=1393, majf=0, minf=0 00:22:05.411 IO depths : 1=0.4%, 2=5.6%, 4=21.0%, 8=60.1%, 16=12.8%, 32=0.0%, >=64=0.0% 00:22:05.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 complete : 0=0.0%, 4=93.3%, 8=2.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 issued rwts: total=1818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.411 filename0: (groupid=0, jobs=1): err= 0: pid=83955: Wed Nov 20 13:41:15 2024 00:22:05.411 read: IOPS=181, BW=724KiB/s (742kB/s)(7252KiB/10011msec) 00:22:05.411 slat (usec): min=5, max=16032, avg=27.50, stdev=397.76 00:22:05.411 clat (msec): min=17, max=167, avg=88.19, stdev=25.29 00:22:05.411 lat (msec): min=17, max=167, avg=88.22, stdev=25.27 00:22:05.411 clat percentiles (msec): 00:22:05.411 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 61], 20.00th=[ 71], 00:22:05.411 | 30.00th=[ 78], 40.00th=[ 81], 50.00th=[ 84], 60.00th=[ 87], 00:22:05.411 | 70.00th=[ 95], 80.00th=[ 109], 90.00th=[ 130], 95.00th=[ 132], 00:22:05.411 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 169], 00:22:05.411 | 99.99th=[ 169] 00:22:05.411 bw ( KiB/s): min= 512, max= 1152, per=3.87%, avg=725.89, stdev=157.08, samples=19 00:22:05.411 iops : min= 128, max= 288, avg=181.47, stdev=39.27, samples=19 00:22:05.411 lat (msec) : 20=0.11%, 50=6.34%, 100=68.89%, 250=24.66% 00:22:05.411 cpu : usr=33.85%, sys=1.55%, ctx=1043, majf=0, minf=9 00:22:05.411 IO depths : 1=0.1%, 2=3.0%, 4=12.0%, 8=70.4%, 16=14.5%, 32=0.0%, >=64=0.0% 00:22:05.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 complete : 
0=0.0%, 4=90.6%, 8=6.8%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.411 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.411 filename0: (groupid=0, jobs=1): err= 0: pid=83956: Wed Nov 20 13:41:15 2024 00:22:05.411 read: IOPS=195, BW=783KiB/s (802kB/s)(7852KiB/10027msec) 00:22:05.411 slat (usec): min=3, max=3048, avg=16.20, stdev=68.68 00:22:05.411 clat (msec): min=18, max=155, avg=81.61, stdev=26.17 00:22:05.411 lat (msec): min=18, max=155, avg=81.62, stdev=26.17 00:22:05.411 clat percentiles (msec): 00:22:05.411 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:22:05.411 | 30.00th=[ 68], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 85], 00:22:05.411 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 129], 95.00th=[ 132], 00:22:05.411 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:22:05.411 | 99.99th=[ 157] 00:22:05.411 bw ( KiB/s): min= 560, max= 1136, per=4.15%, avg=778.90, stdev=176.98, samples=20 00:22:05.411 iops : min= 140, max= 284, avg=194.70, stdev=44.21, samples=20 00:22:05.411 lat (msec) : 20=0.10%, 50=12.68%, 100=69.18%, 250=18.03% 00:22:05.411 cpu : usr=31.85%, sys=1.79%, ctx=1073, majf=0, minf=9 00:22:05.411 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.9%, 16=15.3%, 32=0.0%, >=64=0.0% 00:22:05.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 issued rwts: total=1963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.412 filename1: (groupid=0, jobs=1): err= 0: pid=83957: Wed Nov 20 13:41:15 2024 00:22:05.412 read: IOPS=203, BW=815KiB/s (835kB/s)(8156KiB/10006msec) 00:22:05.412 slat (usec): min=5, max=8025, avg=21.15, stdev=177.64 00:22:05.412 clat (msec): min=10, max=155, avg=78.41, stdev=27.69 00:22:05.412 lat (msec): min=10, max=155, avg=78.43, stdev=27.69 00:22:05.412 clat percentiles (msec): 00:22:05.412 | 1.00th=[ 24], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 54], 00:22:05.412 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:22:05.412 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 128], 95.00th=[ 132], 00:22:05.412 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 157], 00:22:05.412 | 99.99th=[ 157] 00:22:05.412 bw ( KiB/s): min= 560, max= 1182, per=4.36%, avg=818.89, stdev=200.46, samples=19 00:22:05.412 iops : min= 140, max= 295, avg=204.68, stdev=50.05, samples=19 00:22:05.412 lat (msec) : 20=0.29%, 50=17.90%, 100=65.03%, 250=16.77% 00:22:05.412 cpu : usr=34.99%, sys=1.76%, ctx=1053, majf=0, minf=9 00:22:05.412 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:22:05.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 issued rwts: total=2039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.412 filename1: (groupid=0, jobs=1): err= 0: pid=83958: Wed Nov 20 13:41:15 2024 00:22:05.412 read: IOPS=195, BW=782KiB/s (800kB/s)(7820KiB/10004msec) 00:22:05.412 slat (usec): min=4, max=4030, avg=20.13, stdev=150.56 00:22:05.412 clat (msec): min=14, max=155, avg=81.77, stdev=27.05 00:22:05.412 lat (msec): min=14, max=155, avg=81.79, stdev=27.05 00:22:05.412 clat percentiles (msec): 00:22:05.412 | 1.00th=[ 27], 5.00th=[ 43], 10.00th=[ 49], 20.00th=[ 
58], 00:22:05.412 | 30.00th=[ 70], 40.00th=[ 78], 50.00th=[ 81], 60.00th=[ 84], 00:22:05.412 | 70.00th=[ 89], 80.00th=[ 100], 90.00th=[ 129], 95.00th=[ 132], 00:22:05.412 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 157], 99.95th=[ 157], 00:22:05.412 | 99.99th=[ 157] 00:22:05.412 bw ( KiB/s): min= 560, max= 1131, per=4.18%, avg=783.42, stdev=172.65, samples=19 00:22:05.412 iops : min= 140, max= 282, avg=195.79, stdev=43.04, samples=19 00:22:05.412 lat (msec) : 20=0.31%, 50=11.82%, 100=69.36%, 250=18.52% 00:22:05.412 cpu : usr=38.18%, sys=1.94%, ctx=1272, majf=0, minf=9 00:22:05.412 IO depths : 1=0.1%, 2=1.5%, 4=5.7%, 8=77.9%, 16=14.8%, 32=0.0%, >=64=0.0% 00:22:05.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 complete : 0=0.0%, 4=88.2%, 8=10.5%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 issued rwts: total=1955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.412 filename1: (groupid=0, jobs=1): err= 0: pid=83959: Wed Nov 20 13:41:15 2024 00:22:05.412 read: IOPS=198, BW=793KiB/s (812kB/s)(7952KiB/10033msec) 00:22:05.412 slat (nsec): min=4984, max=57477, avg=14932.24, stdev=5631.80 00:22:05.412 clat (msec): min=23, max=156, avg=80.64, stdev=26.51 00:22:05.412 lat (msec): min=23, max=156, avg=80.66, stdev=26.51 00:22:05.412 clat percentiles (msec): 00:22:05.412 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 56], 00:22:05.412 | 30.00th=[ 65], 40.00th=[ 75], 50.00th=[ 80], 60.00th=[ 84], 00:22:05.412 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 127], 95.00th=[ 132], 00:22:05.412 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 157], 00:22:05.412 | 99.99th=[ 157] 00:22:05.412 bw ( KiB/s): min= 560, max= 1024, per=4.20%, avg=788.80, stdev=172.82, samples=20 00:22:05.412 iops : min= 140, max= 256, avg=197.20, stdev=43.21, samples=20 00:22:05.412 lat (msec) : 50=12.42%, 100=69.57%, 250=18.01% 00:22:05.412 cpu : usr=39.59%, sys=1.87%, ctx=1191, majf=0, minf=9 00:22:05.412 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:22:05.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 complete : 0=0.0%, 4=87.4%, 8=12.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 issued rwts: total=1988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.412 filename1: (groupid=0, jobs=1): err= 0: pid=83960: Wed Nov 20 13:41:15 2024 00:22:05.412 read: IOPS=207, BW=832KiB/s (852kB/s)(8388KiB/10083msec) 00:22:05.412 slat (usec): min=5, max=6747, avg=23.91, stdev=181.08 00:22:05.412 clat (usec): min=1628, max=177660, avg=76618.87, stdev=35636.44 00:22:05.412 lat (usec): min=1638, max=177681, avg=76642.78, stdev=35643.23 00:22:05.412 clat percentiles (usec): 00:22:05.412 | 1.00th=[ 1745], 5.00th=[ 4555], 10.00th=[ 23725], 20.00th=[ 50070], 00:22:05.412 | 30.00th=[ 62653], 40.00th=[ 71828], 50.00th=[ 79168], 60.00th=[ 83362], 00:22:05.412 | 70.00th=[ 88605], 80.00th=[105382], 90.00th=[129500], 95.00th=[133694], 00:22:05.412 | 99.00th=[145753], 99.50th=[149947], 99.90th=[166724], 99.95th=[166724], 00:22:05.412 | 99.99th=[177210] 00:22:05.412 bw ( KiB/s): min= 512, max= 2671, per=4.43%, avg=831.15, stdev=464.36, samples=20 00:22:05.412 iops : min= 128, max= 667, avg=207.75, stdev=115.93, samples=20 00:22:05.412 lat (msec) : 2=1.53%, 4=1.86%, 10=4.15%, 20=1.43%, 50=11.25% 00:22:05.412 lat (msec) : 100=57.56%, 250=22.22% 00:22:05.412 cpu : usr=45.80%, sys=2.21%, 
ctx=1436, majf=0, minf=0 00:22:05.412 IO depths : 1=0.3%, 2=1.5%, 4=5.1%, 8=77.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:22:05.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 complete : 0=0.0%, 4=88.8%, 8=10.1%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 issued rwts: total=2097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.412 filename1: (groupid=0, jobs=1): err= 0: pid=83961: Wed Nov 20 13:41:15 2024 00:22:05.412 read: IOPS=198, BW=793KiB/s (812kB/s)(7972KiB/10058msec) 00:22:05.412 slat (usec): min=5, max=8030, avg=35.57, stdev=400.89 00:22:05.412 clat (msec): min=21, max=167, avg=80.50, stdev=29.36 00:22:05.412 lat (msec): min=21, max=167, avg=80.54, stdev=29.36 00:22:05.412 clat percentiles (msec): 00:22:05.412 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 51], 00:22:05.412 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 85], 00:22:05.412 | 70.00th=[ 86], 80.00th=[ 107], 90.00th=[ 130], 95.00th=[ 132], 00:22:05.412 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 169], 00:22:05.412 | 99.99th=[ 169] 00:22:05.412 bw ( KiB/s): min= 488, max= 1534, per=4.21%, avg=790.00, stdev=248.57, samples=20 00:22:05.412 iops : min= 122, max= 383, avg=197.45, stdev=62.05, samples=20 00:22:05.412 lat (msec) : 50=19.37%, 100=59.16%, 250=21.48% 00:22:05.412 cpu : usr=31.63%, sys=1.60%, ctx=849, majf=0, minf=9 00:22:05.412 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:22:05.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.412 filename1: (groupid=0, jobs=1): err= 0: pid=83962: Wed Nov 20 13:41:15 2024 00:22:05.412 read: IOPS=199, BW=799KiB/s (818kB/s)(7992KiB/10002msec) 00:22:05.412 slat (usec): min=3, max=7042, avg=21.63, stdev=181.20 00:22:05.412 clat (usec): min=1777, max=159993, avg=79990.26, stdev=28892.93 00:22:05.412 lat (usec): min=1784, max=160004, avg=80011.89, stdev=28892.10 00:22:05.412 clat percentiles (msec): 00:22:05.412 | 1.00th=[ 3], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 56], 00:22:05.412 | 30.00th=[ 67], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 84], 00:22:05.412 | 70.00th=[ 87], 80.00th=[ 100], 90.00th=[ 128], 95.00th=[ 132], 00:22:05.412 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 161], 99.95th=[ 161], 00:22:05.412 | 99.99th=[ 161] 00:22:05.412 bw ( KiB/s): min= 560, max= 1152, per=4.21%, avg=789.05, stdev=186.71, samples=19 00:22:05.412 iops : min= 140, max= 288, avg=197.26, stdev=46.68, samples=19 00:22:05.412 lat (msec) : 2=0.80%, 4=0.80%, 20=0.30%, 50=11.16%, 100=67.42% 00:22:05.412 lat (msec) : 250=19.52% 00:22:05.412 cpu : usr=37.95%, sys=1.83%, ctx=1226, majf=0, minf=9 00:22:05.412 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=78.1%, 16=14.9%, 32=0.0%, >=64=0.0% 00:22:05.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 complete : 0=0.0%, 4=88.3%, 8=10.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 issued rwts: total=1998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.412 filename1: (groupid=0, jobs=1): err= 0: pid=83963: Wed Nov 20 13:41:15 2024 00:22:05.412 read: IOPS=178, BW=714KiB/s (731kB/s)(7172KiB/10049msec) 00:22:05.412 slat (usec): 
min=6, max=8026, avg=25.76, stdev=250.36 00:22:05.412 clat (msec): min=17, max=169, avg=89.37, stdev=27.30 00:22:05.412 lat (msec): min=17, max=169, avg=89.40, stdev=27.30 00:22:05.412 clat percentiles (msec): 00:22:05.412 | 1.00th=[ 25], 5.00th=[ 35], 10.00th=[ 71], 20.00th=[ 73], 00:22:05.412 | 30.00th=[ 75], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 87], 00:22:05.412 | 70.00th=[ 96], 80.00th=[ 111], 90.00th=[ 132], 95.00th=[ 133], 00:22:05.412 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:22:05.412 | 99.99th=[ 169] 00:22:05.412 bw ( KiB/s): min= 528, max= 1394, per=3.79%, avg=710.90, stdev=194.31, samples=20 00:22:05.412 iops : min= 132, max= 348, avg=177.70, stdev=48.49, samples=20 00:22:05.412 lat (msec) : 20=0.11%, 50=6.97%, 100=67.04%, 250=25.88% 00:22:05.412 cpu : usr=33.59%, sys=1.60%, ctx=928, majf=0, minf=9 00:22:05.412 IO depths : 1=0.1%, 2=4.0%, 4=16.2%, 8=66.1%, 16=13.7%, 32=0.0%, >=64=0.0% 00:22:05.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 complete : 0=0.0%, 4=91.6%, 8=4.8%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.412 issued rwts: total=1793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.412 filename1: (groupid=0, jobs=1): err= 0: pid=83964: Wed Nov 20 13:41:15 2024 00:22:05.412 read: IOPS=206, BW=827KiB/s (847kB/s)(8324KiB/10064msec) 00:22:05.412 slat (usec): min=4, max=8022, avg=22.76, stdev=215.30 00:22:05.412 clat (msec): min=4, max=159, avg=77.09, stdev=31.80 00:22:05.413 lat (msec): min=4, max=159, avg=77.11, stdev=31.80 00:22:05.413 clat percentiles (msec): 00:22:05.413 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 39], 20.00th=[ 51], 00:22:05.413 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 83], 00:22:05.413 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 129], 95.00th=[ 134], 00:22:05.413 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 155], 99.95th=[ 157], 00:22:05.413 | 99.99th=[ 161] 00:22:05.413 bw ( KiB/s): min= 512, max= 2080, per=4.42%, avg=828.70, stdev=338.22, samples=20 00:22:05.413 iops : min= 128, max= 520, avg=207.15, stdev=84.54, samples=20 00:22:05.413 lat (msec) : 10=2.84%, 20=1.68%, 50=14.80%, 100=61.51%, 250=19.17% 00:22:05.413 cpu : usr=38.65%, sys=2.27%, ctx=1240, majf=0, minf=0 00:22:05.413 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:22:05.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 issued rwts: total=2081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.413 filename2: (groupid=0, jobs=1): err= 0: pid=83965: Wed Nov 20 13:41:15 2024 00:22:05.413 read: IOPS=198, BW=792KiB/s (811kB/s)(7976KiB/10065msec) 00:22:05.413 slat (usec): min=5, max=12020, avg=32.85, stdev=401.87 00:22:05.413 clat (msec): min=11, max=179, avg=80.46, stdev=32.09 00:22:05.413 lat (msec): min=11, max=179, avg=80.49, stdev=32.10 00:22:05.413 clat percentiles (msec): 00:22:05.413 | 1.00th=[ 13], 5.00th=[ 24], 10.00th=[ 43], 20.00th=[ 52], 00:22:05.413 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 85], 00:22:05.413 | 70.00th=[ 92], 80.00th=[ 112], 90.00th=[ 130], 95.00th=[ 133], 00:22:05.413 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 180], 00:22:05.413 | 99.99th=[ 180] 00:22:05.413 bw ( KiB/s): min= 504, max= 1904, per=4.23%, avg=793.25, stdev=316.19, samples=20 00:22:05.413 iops : min= 
126, max= 476, avg=198.30, stdev=79.04, samples=20 00:22:05.413 lat (msec) : 20=3.81%, 50=14.19%, 100=59.08%, 250=22.92% 00:22:05.413 cpu : usr=33.26%, sys=1.57%, ctx=951, majf=0, minf=9 00:22:05.413 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.9%, 16=16.6%, 32=0.0%, >=64=0.0% 00:22:05.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 issued rwts: total=1994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.413 filename2: (groupid=0, jobs=1): err= 0: pid=83966: Wed Nov 20 13:41:15 2024 00:22:05.413 read: IOPS=201, BW=805KiB/s (824kB/s)(8092KiB/10051msec) 00:22:05.413 slat (usec): min=7, max=4036, avg=27.97, stdev=219.79 00:22:05.413 clat (msec): min=17, max=167, avg=79.25, stdev=29.16 00:22:05.413 lat (msec): min=17, max=167, avg=79.27, stdev=29.16 00:22:05.413 clat percentiles (msec): 00:22:05.413 | 1.00th=[ 20], 5.00th=[ 30], 10.00th=[ 47], 20.00th=[ 55], 00:22:05.413 | 30.00th=[ 65], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 83], 00:22:05.413 | 70.00th=[ 87], 80.00th=[ 97], 90.00th=[ 129], 95.00th=[ 132], 00:22:05.413 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 155], 99.95th=[ 157], 00:22:05.413 | 99.99th=[ 169] 00:22:05.413 bw ( KiB/s): min= 536, max= 1539, per=4.28%, avg=802.95, stdev=241.12, samples=20 00:22:05.413 iops : min= 134, max= 384, avg=200.70, stdev=60.16, samples=20 00:22:05.413 lat (msec) : 20=1.38%, 50=14.24%, 100=65.35%, 250=19.03% 00:22:05.413 cpu : usr=45.13%, sys=2.10%, ctx=1436, majf=0, minf=9 00:22:05.413 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:22:05.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 issued rwts: total=2023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.413 filename2: (groupid=0, jobs=1): err= 0: pid=83967: Wed Nov 20 13:41:15 2024 00:22:05.413 read: IOPS=183, BW=735KiB/s (753kB/s)(7388KiB/10052msec) 00:22:05.413 slat (usec): min=6, max=8031, avg=34.51, stdev=396.38 00:22:05.413 clat (msec): min=16, max=163, avg=86.79, stdev=26.01 00:22:05.413 lat (msec): min=16, max=163, avg=86.83, stdev=26.03 00:22:05.413 clat percentiles (msec): 00:22:05.413 | 1.00th=[ 26], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 72], 00:22:05.413 | 30.00th=[ 75], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 87], 00:22:05.413 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 129], 95.00th=[ 132], 00:22:05.413 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 157], 99.95th=[ 163], 00:22:05.413 | 99.99th=[ 163] 00:22:05.413 bw ( KiB/s): min= 560, max= 1288, per=3.90%, avg=732.40, stdev=172.59, samples=20 00:22:05.413 iops : min= 140, max= 322, avg=183.10, stdev=43.15, samples=20 00:22:05.413 lat (msec) : 20=0.32%, 50=9.53%, 100=66.49%, 250=23.66% 00:22:05.413 cpu : usr=35.64%, sys=1.72%, ctx=1159, majf=0, minf=9 00:22:05.413 IO depths : 1=0.2%, 2=2.5%, 4=9.9%, 8=72.9%, 16=14.6%, 32=0.0%, >=64=0.0% 00:22:05.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 complete : 0=0.0%, 4=89.8%, 8=8.0%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 issued rwts: total=1847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.413 filename2: (groupid=0, jobs=1): err= 0: pid=83968: Wed Nov 20 
13:41:15 2024 00:22:05.413 read: IOPS=196, BW=787KiB/s (806kB/s)(7908KiB/10049msec) 00:22:05.413 slat (usec): min=3, max=8026, avg=28.55, stdev=299.47 00:22:05.413 clat (msec): min=18, max=157, avg=81.13, stdev=27.05 00:22:05.413 lat (msec): min=18, max=157, avg=81.15, stdev=27.06 00:22:05.413 clat percentiles (msec): 00:22:05.413 | 1.00th=[ 26], 5.00th=[ 43], 10.00th=[ 50], 20.00th=[ 57], 00:22:05.413 | 30.00th=[ 68], 40.00th=[ 75], 50.00th=[ 80], 60.00th=[ 85], 00:22:05.413 | 70.00th=[ 88], 80.00th=[ 99], 90.00th=[ 130], 95.00th=[ 132], 00:22:05.413 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 159], 00:22:05.413 | 99.99th=[ 159] 00:22:05.413 bw ( KiB/s): min= 560, max= 1168, per=4.18%, avg=784.40, stdev=181.85, samples=20 00:22:05.413 iops : min= 140, max= 292, avg=196.10, stdev=45.46, samples=20 00:22:05.413 lat (msec) : 20=0.10%, 50=11.18%, 100=70.46%, 250=18.26% 00:22:05.413 cpu : usr=33.98%, sys=1.77%, ctx=1208, majf=0, minf=9 00:22:05.413 IO depths : 1=0.2%, 2=0.8%, 4=2.6%, 8=80.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:22:05.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 issued rwts: total=1977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.413 filename2: (groupid=0, jobs=1): err= 0: pid=83969: Wed Nov 20 13:41:15 2024 00:22:05.413 read: IOPS=188, BW=753KiB/s (771kB/s)(7556KiB/10041msec) 00:22:05.413 slat (usec): min=4, max=4030, avg=16.19, stdev=92.56 00:22:05.413 clat (msec): min=30, max=154, avg=84.83, stdev=25.50 00:22:05.413 lat (msec): min=30, max=154, avg=84.85, stdev=25.50 00:22:05.413 clat percentiles (msec): 00:22:05.413 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 64], 00:22:05.413 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 82], 60.00th=[ 86], 00:22:05.413 | 70.00th=[ 91], 80.00th=[ 106], 90.00th=[ 128], 95.00th=[ 133], 00:22:05.413 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 155], 00:22:05.413 | 99.99th=[ 155] 00:22:05.413 bw ( KiB/s): min= 560, max= 1024, per=4.01%, avg=751.85, stdev=161.02, samples=20 00:22:05.413 iops : min= 140, max= 256, avg=187.90, stdev=40.19, samples=20 00:22:05.413 lat (msec) : 50=9.00%, 100=67.87%, 250=23.13% 00:22:05.413 cpu : usr=40.25%, sys=1.71%, ctx=1301, majf=0, minf=9 00:22:05.413 IO depths : 1=0.2%, 2=2.2%, 4=8.2%, 8=75.0%, 16=14.5%, 32=0.0%, >=64=0.0% 00:22:05.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 complete : 0=0.0%, 4=89.0%, 8=9.1%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 issued rwts: total=1889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.413 filename2: (groupid=0, jobs=1): err= 0: pid=83970: Wed Nov 20 13:41:15 2024 00:22:05.413 read: IOPS=202, BW=808KiB/s (828kB/s)(8116KiB/10040msec) 00:22:05.413 slat (usec): min=4, max=8027, avg=19.58, stdev=177.98 00:22:05.413 clat (msec): min=14, max=155, avg=79.00, stdev=27.70 00:22:05.413 lat (msec): min=14, max=155, avg=79.02, stdev=27.70 00:22:05.413 clat percentiles (msec): 00:22:05.413 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 52], 00:22:05.413 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:22:05.413 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 124], 95.00th=[ 132], 00:22:05.413 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:22:05.413 | 99.99th=[ 157] 00:22:05.413 bw ( KiB/s): min= 
560, max= 1448, per=4.29%, avg=805.30, stdev=225.62, samples=20 00:22:05.413 iops : min= 140, max= 362, avg=201.30, stdev=56.40, samples=20 00:22:05.413 lat (msec) : 20=0.15%, 50=18.63%, 100=64.22%, 250=17.00% 00:22:05.413 cpu : usr=31.60%, sys=1.54%, ctx=847, majf=0, minf=9 00:22:05.413 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:22:05.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.413 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.413 filename2: (groupid=0, jobs=1): err= 0: pid=83971: Wed Nov 20 13:41:15 2024 00:22:05.413 read: IOPS=195, BW=782KiB/s (801kB/s)(7852KiB/10036msec) 00:22:05.413 slat (usec): min=5, max=8041, avg=22.50, stdev=229.14 00:22:05.413 clat (msec): min=35, max=160, avg=81.58, stdev=25.84 00:22:05.413 lat (msec): min=35, max=160, avg=81.60, stdev=25.84 00:22:05.413 clat percentiles (msec): 00:22:05.413 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 58], 00:22:05.413 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 84], 00:22:05.413 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 128], 95.00th=[ 131], 00:22:05.413 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 161], 99.95th=[ 161], 00:22:05.413 | 99.99th=[ 161] 00:22:05.413 bw ( KiB/s): min= 560, max= 1024, per=4.17%, avg=781.60, stdev=168.19, samples=20 00:22:05.413 iops : min= 140, max= 256, avg=195.40, stdev=42.05, samples=20 00:22:05.413 lat (msec) : 50=11.31%, 100=70.25%, 250=18.44% 00:22:05.413 cpu : usr=38.82%, sys=1.75%, ctx=1254, majf=0, minf=9 00:22:05.413 IO depths : 1=0.2%, 2=1.1%, 4=3.8%, 8=79.8%, 16=15.0%, 32=0.0%, >=64=0.0% 00:22:05.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.414 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.414 issued rwts: total=1963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.414 filename2: (groupid=0, jobs=1): err= 0: pid=83972: Wed Nov 20 13:41:15 2024 00:22:05.414 read: IOPS=200, BW=800KiB/s (820kB/s)(8028KiB/10030msec) 00:22:05.414 slat (usec): min=5, max=8025, avg=19.49, stdev=178.90 00:22:05.414 clat (msec): min=20, max=167, avg=79.77, stdev=27.39 00:22:05.414 lat (msec): min=20, max=167, avg=79.79, stdev=27.38 00:22:05.414 clat percentiles (msec): 00:22:05.414 | 1.00th=[ 40], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 56], 00:22:05.414 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 84], 00:22:05.414 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 129], 95.00th=[ 132], 00:22:05.414 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 157], 00:22:05.414 | 99.99th=[ 169] 00:22:05.414 bw ( KiB/s): min= 560, max= 1168, per=4.26%, avg=798.90, stdev=194.95, samples=20 00:22:05.414 iops : min= 140, max= 292, avg=199.70, stdev=48.71, samples=20 00:22:05.414 lat (msec) : 50=17.34%, 100=64.77%, 250=17.89% 00:22:05.414 cpu : usr=32.34%, sys=1.49%, ctx=887, majf=0, minf=9 00:22:05.414 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=82.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:22:05.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.414 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.414 issued rwts: total=2007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:05.414 
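As a quick cross-check of the aggregate reported next: the 24 jobs above each sustained roughly 714-832 KiB/s, and 24 x ~780 KiB/s is about 18,700 KiB/s, i.e. ~18.3 MiB/s over the ~10 s runs, which matches the group bandwidth and the 185 MiB total in the Run status line below.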
00:22:05.414 Run status group 0 (all jobs): 00:22:05.414 READ: bw=18.3MiB/s (19.2MB/s), 714KiB/s-832KiB/s (731kB/s-852kB/s), io=185MiB (194MB), run=10002-10096msec 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 bdev_null0 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 [2024-11-20 13:41:15.590199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 bdev_null1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.414 { 00:22:05.414 "params": { 00:22:05.414 "name": "Nvme$subsystem", 00:22:05.414 "trtype": "$TEST_TRANSPORT", 00:22:05.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.414 "adrfam": "ipv4", 00:22:05.414 "trsvcid": "$NVMF_PORT", 00:22:05.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.414 "hdgst": ${hdgst:-false}, 00:22:05.414 "ddgst": ${ddgst:-false} 00:22:05.414 }, 00:22:05.414 "method": "bdev_nvme_attach_controller" 00:22:05.414 } 00:22:05.414 EOF 00:22:05.414 )") 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:05.414 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:05.415 13:41:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.415 { 00:22:05.415 "params": { 00:22:05.415 "name": "Nvme$subsystem", 00:22:05.415 "trtype": "$TEST_TRANSPORT", 00:22:05.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.415 "adrfam": "ipv4", 00:22:05.415 "trsvcid": "$NVMF_PORT", 00:22:05.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.415 "hdgst": ${hdgst:-false}, 00:22:05.415 "ddgst": ${ddgst:-false} 00:22:05.415 }, 00:22:05.415 "method": "bdev_nvme_attach_controller" 00:22:05.415 } 00:22:05.415 EOF 00:22:05.415 )") 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
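gen_nvmf_target_json builds one bdev_nvme_attach_controller entry per subsystem from the heredoc template traced above and merges them with jq; the document printed by the trace below is flattened onto one line, and each entry has the following shape (a sketch reconstructed from the values visible in the trace, showing only Nvme0; the surrounding wrapper emitted by nvmf/common.sh is not reproduced here):
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
The fio spdk_bdev ioengine consumes this through --spdk_json_conf, so each entry attaches one NVMe/TCP controller to the target at 10.0.0.3:4420 before the job starts.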
00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:05.415 "params": { 00:22:05.415 "name": "Nvme0", 00:22:05.415 "trtype": "tcp", 00:22:05.415 "traddr": "10.0.0.3", 00:22:05.415 "adrfam": "ipv4", 00:22:05.415 "trsvcid": "4420", 00:22:05.415 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:05.415 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:05.415 "hdgst": false, 00:22:05.415 "ddgst": false 00:22:05.415 }, 00:22:05.415 "method": "bdev_nvme_attach_controller" 00:22:05.415 },{ 00:22:05.415 "params": { 00:22:05.415 "name": "Nvme1", 00:22:05.415 "trtype": "tcp", 00:22:05.415 "traddr": "10.0.0.3", 00:22:05.415 "adrfam": "ipv4", 00:22:05.415 "trsvcid": "4420", 00:22:05.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.415 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:05.415 "hdgst": false, 00:22:05.415 "ddgst": false 00:22:05.415 }, 00:22:05.415 "method": "bdev_nvme_attach_controller" 00:22:05.415 }' 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:05.415 13:41:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:05.415 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:05.415 ... 00:22:05.415 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:05.415 ... 
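The filename banner printed above reflects the job file that gen_fio_conf assembles for this pass (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, one extra file). A minimal sketch of an equivalent standalone job file follows; the section names and the Nvme0n1/Nvme1n1 filenames are assumptions, since the exact bdev names are not echoed at this point in the log:
[global]
thread=1            ; the SPDK fio plugin runs in fio's thread mode
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k      ; read/write/trim block sizes, matching the (R)/(W)/(T) sizes in the banner
iodepth=8
numjobs=2           ; 2 jobs x 2 file sections = the 4 threads fio starts below
runtime=5
time_based=1        ; assumed, so the 64 MiB null bdevs do not end the run early

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
It would be launched essentially as the trace shows: LD_PRELOAD of build/fio/spdk_bdev, then /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf <json> <job file>.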
00:22:05.415 fio-3.35 00:22:05.415 Starting 4 threads 00:22:09.603 00:22:09.603 filename0: (groupid=0, jobs=1): err= 0: pid=84125: Wed Nov 20 13:41:21 2024 00:22:09.603 read: IOPS=1895, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5003msec) 00:22:09.603 slat (nsec): min=3840, max=52096, avg=14889.33, stdev=3391.26 00:22:09.603 clat (usec): min=1092, max=8923, avg=4164.97, stdev=568.25 00:22:09.603 lat (usec): min=1101, max=8938, avg=4179.86, stdev=568.22 00:22:09.603 clat percentiles (usec): 00:22:09.603 | 1.00th=[ 2278], 5.00th=[ 3392], 10.00th=[ 3851], 20.00th=[ 3949], 00:22:09.603 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4146], 00:22:09.603 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5145], 00:22:09.603 | 99.00th=[ 5669], 99.50th=[ 6063], 99.90th=[ 6718], 99.95th=[ 6915], 00:22:09.603 | 99.99th=[ 8979] 00:22:09.603 bw ( KiB/s): min=14208, max=16416, per=24.23%, avg=15240.89, stdev=779.67, samples=9 00:22:09.603 iops : min= 1776, max= 2052, avg=1905.11, stdev=97.46, samples=9 00:22:09.603 lat (msec) : 2=0.53%, 4=31.08%, 10=68.39% 00:22:09.603 cpu : usr=90.74%, sys=8.46%, ctx=16, majf=0, minf=9 00:22:09.603 IO depths : 1=0.1%, 2=20.9%, 4=53.0%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:09.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.603 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.603 issued rwts: total=9485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.603 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:09.603 filename0: (groupid=0, jobs=1): err= 0: pid=84126: Wed Nov 20 13:41:21 2024 00:22:09.603 read: IOPS=2115, BW=16.5MiB/s (17.3MB/s)(82.6MiB/5001msec) 00:22:09.603 slat (nsec): min=5751, max=66144, avg=12572.19, stdev=3920.44 00:22:09.603 clat (usec): min=629, max=7712, avg=3740.44, stdev=892.55 00:22:09.603 lat (usec): min=637, max=7726, avg=3753.02, stdev=893.35 00:22:09.603 clat percentiles (usec): 00:22:09.603 | 1.00th=[ 1221], 5.00th=[ 1483], 10.00th=[ 2278], 20.00th=[ 3392], 00:22:09.603 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:22:09.603 | 70.00th=[ 4080], 80.00th=[ 4178], 90.00th=[ 4490], 95.00th=[ 4752], 00:22:09.603 | 99.00th=[ 5407], 99.50th=[ 5604], 99.90th=[ 6521], 99.95th=[ 6980], 00:22:09.603 | 99.99th=[ 7439] 00:22:09.603 bw ( KiB/s): min=15408, max=18368, per=26.37%, avg=16586.67, stdev=1253.76, samples=9 00:22:09.603 iops : min= 1926, max= 2296, avg=2073.33, stdev=156.72, samples=9 00:22:09.603 lat (usec) : 750=0.37%, 1000=0.09% 00:22:09.603 lat (msec) : 2=8.22%, 4=40.23%, 10=51.08% 00:22:09.603 cpu : usr=91.36%, sys=7.74%, ctx=9, majf=0, minf=0 00:22:09.603 IO depths : 1=0.1%, 2=13.0%, 4=57.8%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:09.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.603 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.603 issued rwts: total=10579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.603 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:09.603 filename1: (groupid=0, jobs=1): err= 0: pid=84127: Wed Nov 20 13:41:21 2024 00:22:09.603 read: IOPS=1996, BW=15.6MiB/s (16.4MB/s)(78.0MiB/5001msec) 00:22:09.603 slat (nsec): min=7740, max=51544, avg=14886.99, stdev=4037.62 00:22:09.603 clat (usec): min=1003, max=8926, avg=3954.19, stdev=670.27 00:22:09.603 lat (usec): min=1012, max=8939, avg=3969.07, stdev=670.32 00:22:09.603 clat percentiles (usec): 00:22:09.603 | 1.00th=[ 1827], 5.00th=[ 2343], 10.00th=[ 3097], 
20.00th=[ 3916], 00:22:09.603 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4080], 00:22:09.603 | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4752], 00:22:09.603 | 99.00th=[ 5407], 99.50th=[ 5604], 99.90th=[ 6456], 99.95th=[ 6915], 00:22:09.603 | 99.99th=[ 8979] 00:22:09.603 bw ( KiB/s): min=14800, max=17424, per=25.65%, avg=16136.78, stdev=844.82, samples=9 00:22:09.603 iops : min= 1850, max= 2178, avg=2017.00, stdev=105.56, samples=9 00:22:09.603 lat (msec) : 2=2.15%, 4=39.32%, 10=58.52% 00:22:09.603 cpu : usr=92.48%, sys=6.64%, ctx=12, majf=0, minf=0 00:22:09.603 IO depths : 1=0.1%, 2=17.3%, 4=55.4%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:09.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.603 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.603 issued rwts: total=9984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.603 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:09.603 filename1: (groupid=0, jobs=1): err= 0: pid=84128: Wed Nov 20 13:41:21 2024 00:22:09.603 read: IOPS=1857, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5002msec) 00:22:09.603 slat (usec): min=4, max=623, avg=15.27, stdev= 7.24 00:22:09.603 clat (usec): min=1010, max=8931, avg=4248.60, stdev=560.57 00:22:09.603 lat (usec): min=1019, max=8952, avg=4263.87, stdev=560.42 00:22:09.603 clat percentiles (usec): 00:22:09.603 | 1.00th=[ 2868], 5.00th=[ 3556], 10.00th=[ 3916], 20.00th=[ 3982], 00:22:09.603 | 30.00th=[ 4015], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4178], 00:22:09.603 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 4883], 95.00th=[ 5342], 00:22:09.603 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 8160], 99.95th=[ 8356], 00:22:09.603 | 99.99th=[ 8979] 00:22:09.603 bw ( KiB/s): min=13040, max=15872, per=23.68%, avg=14897.44, stdev=905.57, samples=9 00:22:09.603 iops : min= 1630, max= 1984, avg=1862.11, stdev=113.20, samples=9 00:22:09.603 lat (msec) : 2=0.44%, 4=28.85%, 10=70.71% 00:22:09.603 cpu : usr=91.84%, sys=7.36%, ctx=35, majf=0, minf=9 00:22:09.603 IO depths : 1=0.1%, 2=22.5%, 4=52.1%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:09.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.603 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.603 issued rwts: total=9290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.603 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:09.603 00:22:09.603 Run status group 0 (all jobs): 00:22:09.603 READ: bw=61.4MiB/s (64.4MB/s), 14.5MiB/s-16.5MiB/s (15.2MB/s-17.3MB/s), io=307MiB (322MB), run=5001-5003msec 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
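The teardown starting here, and the re-creation that follows for the digest test, go through rpc_cmd, which in this harness forwards to SPDK's scripts/rpc.py against the running target. A rough standalone equivalent of the calls visible in the surrounding trace (the rpc.py path is assumed; method names, arguments, and flags are as logged):
# tear down the rand_params subsystems and their null bdevs
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_null_delete bdev_null1
# recreate subsystem 0 for fio_dif_digest, now with DIF type 3 metadata on the null bdev
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420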
00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:09.862 ************************************ 00:22:09.862 END TEST fio_dif_rand_params 00:22:09.862 ************************************ 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.862 00:22:09.862 real 0m23.758s 00:22:09.862 user 2m4.043s 00:22:09.862 sys 0m8.119s 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.862 13:41:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:09.862 13:41:21 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:09.862 13:41:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:09.862 13:41:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.862 13:41:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:10.121 ************************************ 00:22:10.121 START TEST fio_dif_digest 00:22:10.121 ************************************ 00:22:10.121 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:22:10.121 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:22:10.121 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:10.121 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:22:10.122 13:41:21 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:10.122 bdev_null0 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:10.122 [2024-11-20 13:41:21.853647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.122 { 00:22:10.122 "params": { 00:22:10.122 "name": "Nvme$subsystem", 00:22:10.122 "trtype": "$TEST_TRANSPORT", 00:22:10.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.122 "adrfam": "ipv4", 
00:22:10.122 "trsvcid": "$NVMF_PORT", 00:22:10.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.122 "hdgst": ${hdgst:-false}, 00:22:10.122 "ddgst": ${ddgst:-false} 00:22:10.122 }, 00:22:10.122 "method": "bdev_nvme_attach_controller" 00:22:10.122 } 00:22:10.122 EOF 00:22:10.122 )") 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:10.122 "params": { 00:22:10.122 "name": "Nvme0", 00:22:10.122 "trtype": "tcp", 00:22:10.122 "traddr": "10.0.0.3", 00:22:10.122 "adrfam": "ipv4", 00:22:10.122 "trsvcid": "4420", 00:22:10.122 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:10.122 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:10.122 "hdgst": true, 00:22:10.122 "ddgst": true 00:22:10.122 }, 00:22:10.122 "method": "bdev_nvme_attach_controller" 00:22:10.122 }' 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:10.122 13:41:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:10.381 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:10.381 ... 
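The run itself goes through fio's spdk_bdev ioengine: the bdev_nvme_attach_controller config printed above is fed in as JSON on /dev/fd/62 and the generated job file on /dev/fd/61. A standalone sketch of the same invocation, with ordinary files standing in for those descriptors (bdev.json and digest.fio are illustrative names, not files the test creates):

    # bdev.json  : the {"params": ..., "method": "bdev_nvme_attach_controller"} block shown above (hdgst/ddgst true)
    # digest.fio : the generated job: randread, bs=128k, iodepth=3, 3 jobs, 10 s runtime
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json digest.fio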
00:22:10.381 fio-3.35 00:22:10.381 Starting 3 threads 00:22:22.585 00:22:22.585 filename0: (groupid=0, jobs=1): err= 0: pid=84243: Wed Nov 20 13:41:32 2024 00:22:22.585 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(275MiB/10002msec) 00:22:22.585 slat (nsec): min=8442, max=47169, avg=15180.73, stdev=2721.98 00:22:22.585 clat (usec): min=11264, max=14978, avg=13611.69, stdev=198.26 00:22:22.585 lat (usec): min=11278, max=15000, avg=13626.87, stdev=198.36 00:22:22.585 clat percentiles (usec): 00:22:22.585 | 1.00th=[13435], 5.00th=[13435], 10.00th=[13435], 20.00th=[13566], 00:22:22.585 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13566], 60.00th=[13566], 00:22:22.585 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13829], 95.00th=[13960], 00:22:22.585 | 99.00th=[14353], 99.50th=[14484], 99.90th=[15008], 99.95th=[15008], 00:22:22.585 | 99.99th=[15008] 00:22:22.585 bw ( KiB/s): min=27648, max=28416, per=33.34%, avg=28133.05, stdev=380.62, samples=19 00:22:22.585 iops : min= 216, max= 222, avg=219.79, stdev= 2.97, samples=19 00:22:22.585 lat (msec) : 20=100.00% 00:22:22.585 cpu : usr=91.22%, sys=8.23%, ctx=7, majf=0, minf=0 00:22:22.585 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:22.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.585 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.585 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:22.585 filename0: (groupid=0, jobs=1): err= 0: pid=84244: Wed Nov 20 13:41:32 2024 00:22:22.585 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(275MiB/10002msec) 00:22:22.585 slat (nsec): min=8354, max=43959, avg=15287.41, stdev=2771.70 00:22:22.585 clat (usec): min=11254, max=14980, avg=13611.33, stdev=198.34 00:22:22.585 lat (usec): min=11268, max=15006, avg=13626.62, stdev=198.53 00:22:22.585 clat percentiles (usec): 00:22:22.585 | 1.00th=[13435], 5.00th=[13435], 10.00th=[13435], 20.00th=[13566], 00:22:22.585 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13566], 60.00th=[13566], 00:22:22.585 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13829], 95.00th=[13960], 00:22:22.585 | 99.00th=[14353], 99.50th=[14484], 99.90th=[15008], 99.95th=[15008], 00:22:22.585 | 99.99th=[15008] 00:22:22.585 bw ( KiB/s): min=27648, max=28416, per=33.34%, avg=28133.05, stdev=380.62, samples=19 00:22:22.585 iops : min= 216, max= 222, avg=219.79, stdev= 2.97, samples=19 00:22:22.585 lat (msec) : 20=100.00% 00:22:22.585 cpu : usr=91.26%, sys=8.20%, ctx=21, majf=0, minf=0 00:22:22.585 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:22.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.585 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.585 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:22.586 filename0: (groupid=0, jobs=1): err= 0: pid=84245: Wed Nov 20 13:41:32 2024 00:22:22.586 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(275MiB/10006msec) 00:22:22.586 slat (nsec): min=5393, max=55065, avg=11217.42, stdev=4163.09 00:22:22.586 clat (usec): min=13369, max=16651, avg=13622.89, stdev=205.61 00:22:22.586 lat (usec): min=13377, max=16688, avg=13634.10, stdev=205.96 00:22:22.586 clat percentiles (usec): 00:22:22.586 | 1.00th=[13435], 5.00th=[13435], 10.00th=[13435], 20.00th=[13566], 00:22:22.586 | 30.00th=[13566], 40.00th=[13566], 
50.00th=[13566], 60.00th=[13566], 00:22:22.586 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13829], 95.00th=[13960], 00:22:22.586 | 99.00th=[14353], 99.50th=[14615], 99.90th=[16581], 99.95th=[16581], 00:22:22.586 | 99.99th=[16712] 00:22:22.586 bw ( KiB/s): min=27648, max=28416, per=33.34%, avg=28133.05, stdev=380.62, samples=19 00:22:22.586 iops : min= 216, max= 222, avg=219.79, stdev= 2.97, samples=19 00:22:22.586 lat (msec) : 20=100.00% 00:22:22.586 cpu : usr=90.85%, sys=8.56%, ctx=16, majf=0, minf=0 00:22:22.586 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:22.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.586 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.586 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:22.586 00:22:22.586 Run status group 0 (all jobs): 00:22:22.586 READ: bw=82.4MiB/s (86.4MB/s), 27.5MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=825MiB (865MB), run=10002-10006msec 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.586 00:22:22.586 real 0m11.096s 00:22:22.586 user 0m28.042s 00:22:22.586 sys 0m2.804s 00:22:22.586 ************************************ 00:22:22.586 END TEST fio_dif_digest 00:22:22.586 ************************************ 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:22.586 13:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:22.586 13:41:32 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:22.586 13:41:32 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:22:22.586 13:41:32 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:22.586 13:41:32 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:22.586 rmmod nvme_tcp 00:22:22.586 rmmod nvme_fabrics 00:22:22.586 rmmod nvme_keyring 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:22.586 13:41:33 nvmf_dif 
-- nvmf/common.sh@128 -- # set -e 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83476 ']' 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83476 00:22:22.586 13:41:33 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83476 ']' 00:22:22.586 13:41:33 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83476 00:22:22.586 13:41:33 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:22:22.586 13:41:33 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.586 13:41:33 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83476 00:22:22.586 killing process with pid 83476 00:22:22.586 13:41:33 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:22.586 13:41:33 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:22.586 13:41:33 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83476' 00:22:22.586 13:41:33 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83476 00:22:22.586 13:41:33 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83476 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:22.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:22.586 Waiting for block devices as requested 00:22:22.586 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:22.586 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:22.586 13:41:33 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:22.586 13:41:34 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:22.586 13:41:34 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:22.586 13:41:34 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:22.586 13:41:34 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:22.586 13:41:34 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:22.586 13:41:34 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:22.586 13:41:34 nvmf_dif -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:22:22.586 13:41:34 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.586 13:41:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:22.586 13:41:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.586 13:41:34 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:22:22.586 00:22:22.586 real 1m0.146s 00:22:22.586 user 3m48.271s 00:22:22.586 sys 0m19.697s 00:22:22.586 13:41:34 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:22.586 13:41:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:22.586 ************************************ 00:22:22.586 END TEST nvmf_dif 00:22:22.586 ************************************ 00:22:22.586 13:41:34 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:22.586 13:41:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:22.586 13:41:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:22.586 13:41:34 -- common/autotest_common.sh@10 -- # set +x 00:22:22.587 ************************************ 00:22:22.587 START TEST nvmf_abort_qd_sizes 00:22:22.587 ************************************ 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:22.587 * Looking for test storage... 00:22:22.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:22.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.587 --rc genhtml_branch_coverage=1 00:22:22.587 --rc genhtml_function_coverage=1 00:22:22.587 --rc genhtml_legend=1 00:22:22.587 --rc geninfo_all_blocks=1 00:22:22.587 --rc geninfo_unexecuted_blocks=1 00:22:22.587 00:22:22.587 ' 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:22.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.587 --rc genhtml_branch_coverage=1 00:22:22.587 --rc genhtml_function_coverage=1 00:22:22.587 --rc genhtml_legend=1 00:22:22.587 --rc geninfo_all_blocks=1 00:22:22.587 --rc geninfo_unexecuted_blocks=1 00:22:22.587 00:22:22.587 ' 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:22.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.587 --rc genhtml_branch_coverage=1 00:22:22.587 --rc genhtml_function_coverage=1 00:22:22.587 --rc genhtml_legend=1 00:22:22.587 --rc geninfo_all_blocks=1 00:22:22.587 --rc geninfo_unexecuted_blocks=1 00:22:22.587 00:22:22.587 ' 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:22.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.587 --rc genhtml_branch_coverage=1 00:22:22.587 --rc genhtml_function_coverage=1 00:22:22.587 --rc genhtml_legend=1 00:22:22.587 --rc geninfo_all_blocks=1 00:22:22.587 --rc geninfo_unexecuted_blocks=1 00:22:22.587 00:22:22.587 ' 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:22.587 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:22.587 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:22.588 Cannot find device "nvmf_init_br" 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:22.588 Cannot find device "nvmf_init_br2" 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:22.588 Cannot find device "nvmf_tgt_br" 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:22.588 Cannot find device "nvmf_tgt_br2" 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:22.588 Cannot find device "nvmf_init_br" 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:22.588 Cannot find device "nvmf_init_br2" 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:22:22.588 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:22.846 Cannot find device "nvmf_tgt_br" 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:22.846 Cannot find device "nvmf_tgt_br2" 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:22.846 Cannot find device "nvmf_br" 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:22.846 Cannot find device "nvmf_init_if" 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:22.846 Cannot find device "nvmf_init_if2" 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:22.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:22.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:22.846 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:23.105 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:23.105 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:23.105 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:23.105 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:23.105 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:22:23.105 00:22:23.105 --- 10.0.0.3 ping statistics --- 00:22:23.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.105 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:22:23.105 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:23.105 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:23.105 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:22:23.105 00:22:23.105 --- 10.0.0.4 ping statistics --- 00:22:23.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.105 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:22:23.105 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:23.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:23.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:23.105 00:22:23.105 --- 10.0.0.1 ping statistics --- 00:22:23.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.105 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:23.105 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:23.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:23.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:22:23.105 00:22:23.105 --- 10.0.0.2 ping statistics --- 00:22:23.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.105 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:23.105 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.105 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:22:23.105 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:22:23.105 13:41:34 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:23.671 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:23.671 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:23.930 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:23.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84888 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84888 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84888 ']' 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.930 13:41:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:23.930 [2024-11-20 13:41:35.801944] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
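Condensed from the nvmf_veth_init trace above, the virtual topology this abort_qd_sizes run uses is built roughly as follows (interface names and addresses exactly as logged; link-up steps and the second initiator/target pair abbreviated):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side, moved into the netns, 10.0.0.3/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                                # bridge ties the *_br veth peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # nvmf_init_if2 / nvmf_tgt_if2 (10.0.0.2 and 10.0.0.4) are configured the same way, and the
    # pings above confirm 10.0.0.1-10.0.0.4 are reachable before nvmf_tgt is started inside the netns.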
00:22:23.930 [2024-11-20 13:41:35.802059] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.188 [2024-11-20 13:41:35.964304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.188 [2024-11-20 13:41:36.057840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.188 [2024-11-20 13:41:36.057913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.188 [2024-11-20 13:41:36.057938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.188 [2024-11-20 13:41:36.057949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.189 [2024-11-20 13:41:36.057958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.189 [2024-11-20 13:41:36.059287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.189 [2024-11-20 13:41:36.059390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.189 [2024-11-20 13:41:36.059427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.189 [2024-11-20 13:41:36.059431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.189 [2024-11-20 13:41:36.119304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:25.123 13:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.123 13:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:22:25.123 13:41:36 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:25.123 13:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:25.123 13:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:25.123 13:41:36 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.123 13:41:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:22:25.124 13:41:36 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
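The nvme_in_userspace helper traced above discovers the two test controllers by filtering lspci for PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe); the same commands from the trace, assembled into a single pipeline:

    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # prints the matching BDFs, here 0000:00:10.0 and 0000:00:11.0; each is then checked further
    # (e.g. [[ -e /sys/bus/pci/drivers/nvme/<bdf> ]] and uname -s) before landing in the nvmes array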
00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.124 13:41:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:25.124 ************************************ 00:22:25.124 START TEST spdk_target_abort 00:22:25.124 ************************************ 00:22:25.124 13:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:22:25.124 13:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:25.124 13:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:22:25.124 13:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.124 13:41:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:25.124 spdk_targetn1 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:25.124 [2024-11-20 13:41:37.018822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:25.124 [2024-11-20 13:41:37.059640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:25.124 13:41:37 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:25.124 13:41:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:28.495 Initializing NVMe Controllers 00:22:28.495 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:28.495 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:28.495 Initialization complete. Launching workers. 
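The spdk_target_abort setup above is driven through rpc_cmd; outside the harness the same target could be stood up by hand with scripts/rpc.py and the values captured in this run (a sketch only, assuming the default spdk_tgt RPC socket):

  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420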
00:22:28.495 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10645, failed: 0 00:22:28.495 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1016, failed to submit 9629 00:22:28.495 success 859, unsuccessful 157, failed 0 00:22:28.495 13:41:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:28.495 13:41:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:31.805 Initializing NVMe Controllers 00:22:31.805 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:31.805 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:31.805 Initialization complete. Launching workers. 00:22:31.805 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8918, failed: 0 00:22:31.805 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1151, failed to submit 7767 00:22:31.805 success 425, unsuccessful 726, failed 0 00:22:31.805 13:41:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:31.805 13:41:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:35.124 Initializing NVMe Controllers 00:22:35.124 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:35.124 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:35.124 Initialization complete. Launching workers. 
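The queue-depth sweep above (qds 4, 24, 64) comes from the rabort() loop in abort_qd_sizes.sh; condensed, it is simply (sketch):

  for qd in 4 24 64; do
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done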
00:22:35.124 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30768, failed: 0 00:22:35.124 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2217, failed to submit 28551 00:22:35.124 success 490, unsuccessful 1727, failed 0 00:22:35.124 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:22:35.124 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.124 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.124 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.124 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:22:35.124 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.124 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.692 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.692 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84888 00:22:35.692 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84888 ']' 00:22:35.692 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84888 00:22:35.692 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:22:35.692 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.692 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84888 00:22:35.692 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.692 killing process with pid 84888 00:22:35.692 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.692 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84888' 00:22:35.692 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84888 00:22:35.692 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84888 00:22:35.950 00:22:35.950 real 0m10.797s 00:22:35.950 user 0m43.772s 00:22:35.950 sys 0m2.189s 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.950 ************************************ 00:22:35.950 END TEST spdk_target_abort 00:22:35.950 ************************************ 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.950 13:41:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:22:35.950 13:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:35.950 13:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.950 13:41:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:35.950 ************************************ 00:22:35.950 START TEST kernel_target_abort 00:22:35.950 
************************************ 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:35.950 13:41:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:36.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:36.467 Waiting for block devices as requested 00:22:36.467 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:36.467 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:36.467 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:36.467 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:36.467 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:22:36.467 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:36.467 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:36.467 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:36.467 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:22:36.467 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:36.467 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:36.726 No valid GPT data, bailing 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:36.726 No valid GPT data, bailing 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:36.726 No valid GPT data, bailing 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:22:36.726 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:36.986 No valid GPT data, bailing 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 --hostid=8ff08136-65da-4f4c-b769-a07096c587b5 -a 10.0.0.1 -t tcp -s 4420 00:22:36.986 00:22:36.986 Discovery Log Number of Records 2, Generation counter 2 00:22:36.986 =====Discovery Log Entry 0====== 00:22:36.986 trtype: tcp 00:22:36.986 adrfam: ipv4 00:22:36.986 subtype: current discovery subsystem 00:22:36.986 treq: not specified, sq flow control disable supported 00:22:36.986 portid: 1 00:22:36.986 trsvcid: 4420 00:22:36.986 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:36.986 traddr: 10.0.0.1 00:22:36.986 eflags: none 00:22:36.986 sectype: none 00:22:36.986 =====Discovery Log Entry 1====== 00:22:36.986 trtype: tcp 00:22:36.986 adrfam: ipv4 00:22:36.986 subtype: nvme subsystem 00:22:36.986 treq: not specified, sq flow control disable supported 00:22:36.986 portid: 1 00:22:36.986 trsvcid: 4420 00:22:36.986 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:36.986 traddr: 10.0.0.1 00:22:36.986 eflags: none 00:22:36.986 sectype: none 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:36.986 13:41:48 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:36.986 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:36.987 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:36.987 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:36.987 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:36.987 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:22:36.987 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:36.987 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:22:36.987 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:36.987 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:36.987 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:36.987 13:41:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:40.274 Initializing NVMe Controllers 00:22:40.274 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:40.274 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:40.274 Initialization complete. Launching workers. 00:22:40.274 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31634, failed: 0 00:22:40.274 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31634, failed to submit 0 00:22:40.274 success 0, unsuccessful 31634, failed 0 00:22:40.274 13:41:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:40.274 13:41:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:43.557 Initializing NVMe Controllers 00:22:43.557 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:43.557 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:43.557 Initialization complete. Launching workers. 
00:22:43.557 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64058, failed: 0 00:22:43.557 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27499, failed to submit 36559 00:22:43.557 success 0, unsuccessful 27499, failed 0 00:22:43.557 13:41:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:43.557 13:41:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:46.839 Initializing NVMe Controllers 00:22:46.839 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:46.839 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:46.839 Initialization complete. Launching workers. 00:22:46.839 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71162, failed: 0 00:22:46.839 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17774, failed to submit 53388 00:22:46.839 success 0, unsuccessful 17774, failed 0 00:22:46.839 13:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:22:46.839 13:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:46.839 13:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:22:46.839 13:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:46.839 13:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:46.839 13:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:46.839 13:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:46.839 13:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:46.839 13:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:46.839 13:41:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:47.407 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:48.785 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:48.785 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:48.785 00:22:48.785 real 0m12.892s 00:22:48.785 user 0m6.303s 00:22:48.785 sys 0m3.924s 00:22:48.785 13:42:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.785 13:42:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:48.785 ************************************ 00:22:48.785 END TEST kernel_target_abort 00:22:48.785 ************************************ 00:22:48.785 13:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:48.785 13:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:22:48.785 
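configure_kernel_target writes its values into nvmet configfs, but xtrace hides the redirect targets of the echo calls traced above. Assuming the standard nvmet attribute names (an assumption; they are not visible in this log), the setup is roughly:

  modprobe nvmet
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir ports/1
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme1n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1     > ports/1/addr_traddr
  echo tcp          > ports/1/addr_trtype
  echo 4420         > ports/1/addr_trsvcid
  echo ipv4         > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

clean_kernel_target then undoes this in reverse, as traced: disable the namespace, remove the port symlink, rmdir the namespace, port and subsystem directories, and finally modprobe -r nvmet_tcp nvmet.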
13:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:48.785 13:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:49.092 rmmod nvme_tcp 00:22:49.092 rmmod nvme_fabrics 00:22:49.092 rmmod nvme_keyring 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84888 ']' 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84888 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84888 ']' 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84888 00:22:49.092 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84888) - No such process 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84888 is not found' 00:22:49.092 Process with pid 84888 is not found 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:22:49.092 13:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:49.385 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:49.385 Waiting for block devices as requested 00:22:49.385 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:49.644 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:49.644 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:49.644 13:42:01 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:49.903 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:49.903 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:49.903 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:49.903 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:49.903 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:49.903 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.903 13:42:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:49.903 13:42:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.903 13:42:01 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:22:49.903 00:22:49.903 real 0m27.531s 00:22:49.903 user 0m51.485s 00:22:49.903 sys 0m7.608s 00:22:49.903 13:42:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.903 13:42:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:49.903 ************************************ 00:22:49.903 END TEST nvmf_abort_qd_sizes 00:22:49.903 ************************************ 00:22:49.903 13:42:01 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:49.903 13:42:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:49.903 13:42:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.903 13:42:01 -- common/autotest_common.sh@10 -- # set +x 00:22:49.903 ************************************ 00:22:49.903 START TEST keyring_file 00:22:49.903 ************************************ 00:22:49.903 13:42:01 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:50.162 * Looking for test storage... 
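The nvmftestfini/nvmf_veth_fini teardown traced above amounts to restoring iptables without the SPDK_NVMF rules and dismantling the veth/bridge topology; condensed (a sketch of the same steps; the final namespace removal inside remove_spdk_ns is not spelled out in the trace):

  iptables-save | grep -v SPDK_NVMF | iptables-restore
  for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" nomaster
      ip link set "$ifc" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  # remove_spdk_ns presumably deletes the nvmf_tgt_ns_spdk namespace itself; that step runs
  # under xtrace_disable_per_cmd and is therefore not visible in the trace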
00:22:50.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:50.162 13:42:01 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:50.162 13:42:01 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:22:50.162 13:42:01 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:50.162 13:42:01 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:50.162 13:42:01 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.162 13:42:01 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.162 13:42:01 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.162 13:42:01 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.162 13:42:01 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.162 13:42:01 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.162 13:42:01 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.162 13:42:01 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.162 13:42:01 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.162 13:42:01 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.162 13:42:01 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.163 13:42:01 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:22:50.163 13:42:01 keyring_file -- scripts/common.sh@345 -- # : 1 00:22:50.163 13:42:01 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.163 13:42:01 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@353 -- # local d=1 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@355 -- # echo 1 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@353 -- # local d=2 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@355 -- # echo 2 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@368 -- # return 0 00:22:50.163 13:42:02 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.163 13:42:02 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:50.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.163 --rc genhtml_branch_coverage=1 00:22:50.163 --rc genhtml_function_coverage=1 00:22:50.163 --rc genhtml_legend=1 00:22:50.163 --rc geninfo_all_blocks=1 00:22:50.163 --rc geninfo_unexecuted_blocks=1 00:22:50.163 00:22:50.163 ' 00:22:50.163 13:42:02 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:50.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.163 --rc genhtml_branch_coverage=1 00:22:50.163 --rc genhtml_function_coverage=1 00:22:50.163 --rc genhtml_legend=1 00:22:50.163 --rc geninfo_all_blocks=1 00:22:50.163 --rc 
geninfo_unexecuted_blocks=1 00:22:50.163 00:22:50.163 ' 00:22:50.163 13:42:02 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:50.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.163 --rc genhtml_branch_coverage=1 00:22:50.163 --rc genhtml_function_coverage=1 00:22:50.163 --rc genhtml_legend=1 00:22:50.163 --rc geninfo_all_blocks=1 00:22:50.163 --rc geninfo_unexecuted_blocks=1 00:22:50.163 00:22:50.163 ' 00:22:50.163 13:42:02 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:50.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.163 --rc genhtml_branch_coverage=1 00:22:50.163 --rc genhtml_function_coverage=1 00:22:50.163 --rc genhtml_legend=1 00:22:50.163 --rc geninfo_all_blocks=1 00:22:50.163 --rc geninfo_unexecuted_blocks=1 00:22:50.163 00:22:50.163 ' 00:22:50.163 13:42:02 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:50.163 13:42:02 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.163 13:42:02 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.163 13:42:02 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.163 13:42:02 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.163 13:42:02 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.163 13:42:02 keyring_file -- paths/export.sh@5 -- # export PATH 00:22:50.163 13:42:02 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@51 -- # : 0 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.163 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.163 13:42:02 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:50.163 13:42:02 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:50.163 13:42:02 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:50.163 13:42:02 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:22:50.163 13:42:02 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:22:50.163 13:42:02 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:22:50.163 13:42:02 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:50.163 13:42:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:50.163 13:42:02 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:50.163 13:42:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:50.163 13:42:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:50.163 13:42:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:50.163 13:42:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OHaasIQdrn 00:22:50.163 13:42:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:50.163 13:42:02 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:50.164 13:42:02 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:50.164 13:42:02 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:50.164 13:42:02 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:50.422 13:42:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OHaasIQdrn 00:22:50.422 13:42:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OHaasIQdrn 00:22:50.422 13:42:02 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.OHaasIQdrn 00:22:50.422 13:42:02 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:22:50.422 13:42:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:50.422 13:42:02 keyring_file -- keyring/common.sh@17 -- # name=key1 00:22:50.422 13:42:02 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:50.422 13:42:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:50.422 13:42:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:50.422 13:42:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0kiXn2EaZ5 00:22:50.422 13:42:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:50.422 13:42:02 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:50.422 13:42:02 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:50.422 13:42:02 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:50.422 13:42:02 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:22:50.422 13:42:02 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:50.422 13:42:02 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:50.422 13:42:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0kiXn2EaZ5 00:22:50.422 13:42:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0kiXn2EaZ5 00:22:50.422 13:42:02 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0kiXn2EaZ5 00:22:50.422 13:42:02 keyring_file -- keyring/file.sh@30 -- # tgtpid=85821 00:22:50.422 13:42:02 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:50.422 13:42:02 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85821 00:22:50.422 13:42:02 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85821 ']' 00:22:50.422 13:42:02 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.422 13:42:02 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
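The two prep_key calls above only generate and chmod the PSK files; later in this same test they are registered with the bdevperf keyring and consumed when attaching the TLS-enabled controller, roughly as follows (a sketch using the exact socket, paths and NQNs from this run):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OHaasIQdrn
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0kiXn2EaZ5
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0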
00:22:50.422 13:42:02 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.422 13:42:02 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.422 13:42:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:50.422 [2024-11-20 13:42:02.265850] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:22:50.422 [2024-11-20 13:42:02.265992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85821 ] 00:22:50.681 [2024-11-20 13:42:02.414907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.681 [2024-11-20 13:42:02.483358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.681 [2024-11-20 13:42:02.554674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:50.941 13:42:02 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:50.941 [2024-11-20 13:42:02.787327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.941 null0 00:22:50.941 [2024-11-20 13:42:02.819338] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.941 [2024-11-20 13:42:02.819562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.941 13:42:02 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:50.941 [2024-11-20 13:42:02.847300] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:22:50.941 request: 00:22:50.941 { 00:22:50.941 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:22:50.941 "secure_channel": false, 00:22:50.941 "listen_address": { 00:22:50.941 "trtype": "tcp", 00:22:50.941 "traddr": "127.0.0.1", 00:22:50.941 "trsvcid": "4420" 00:22:50.941 }, 00:22:50.941 "method": "nvmf_subsystem_add_listener", 
00:22:50.941 "req_id": 1 00:22:50.941 } 00:22:50.941 Got JSON-RPC error response 00:22:50.941 response: 00:22:50.941 { 00:22:50.941 "code": -32602, 00:22:50.941 "message": "Invalid parameters" 00:22:50.941 } 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:50.941 13:42:02 keyring_file -- keyring/file.sh@47 -- # bperfpid=85831 00:22:50.941 13:42:02 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:22:50.941 13:42:02 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85831 /var/tmp/bperf.sock 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85831 ']' 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.941 13:42:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:51.200 [2024-11-20 13:42:02.931017] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:22:51.200 [2024-11-20 13:42:02.931143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85831 ] 00:22:51.200 [2024-11-20 13:42:03.086375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.458 [2024-11-20 13:42:03.160643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.458 [2024-11-20 13:42:03.220773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:51.458 13:42:03 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.458 13:42:03 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:51.458 13:42:03 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OHaasIQdrn 00:22:51.458 13:42:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OHaasIQdrn 00:22:51.716 13:42:03 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0kiXn2EaZ5 00:22:51.717 13:42:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0kiXn2EaZ5 00:22:52.284 13:42:03 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:22:52.284 13:42:03 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:22:52.284 13:42:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:52.284 13:42:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:52.284 13:42:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:52.284 13:42:04 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.OHaasIQdrn == \/\t\m\p\/\t\m\p\.\O\H\a\a\s\I\Q\d\r\n ]] 00:22:52.284 13:42:04 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:22:52.284 13:42:04 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:22:52.284 13:42:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:52.284 13:42:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:52.284 13:42:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:52.853 13:42:04 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.0kiXn2EaZ5 == \/\t\m\p\/\t\m\p\.\0\k\i\X\n\2\E\a\Z\5 ]] 00:22:52.853 13:42:04 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:22:52.853 13:42:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:52.853 13:42:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:52.853 13:42:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:52.853 13:42:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:52.853 13:42:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:53.112 13:42:04 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:22:53.112 13:42:04 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:22:53.112 13:42:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:53.112 13:42:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:53.112 13:42:04 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:53.112 13:42:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:53.112 13:42:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:53.371 13:42:05 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:22:53.371 13:42:05 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:53.371 13:42:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:53.630 [2024-11-20 13:42:05.457224] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.630 nvme0n1 00:22:53.630 13:42:05 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:22:53.630 13:42:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:53.630 13:42:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:53.630 13:42:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:53.630 13:42:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:53.630 13:42:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:54.198 13:42:05 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:22:54.198 13:42:05 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:22:54.198 13:42:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:54.198 13:42:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:54.198 13:42:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:54.198 13:42:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:54.198 13:42:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:54.198 13:42:06 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:22:54.198 13:42:06 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:54.457 Running I/O for 1 seconds... 
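For reference, the keyring_file flow traced above reduces to a handful of JSON-RPC calls against bdevperf's RPC socket. A minimal sketch of the same sequence, using the socket path, key files and NQNs from this run (the /tmp/tmp.* names are the throwaway files mktemp produced earlier):

    # register the two PSK files with the keyring_file module
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OHaasIQdrn
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0kiXn2EaZ5
    # attach an NVMe/TCP controller that uses key0 as its TLS PSK
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    # the attach takes an extra reference on key0 (refcnt goes from 1 to 2); key1 stays at 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'
    # drive I/O through the attached controller
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests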
00:22:55.395 11211.00 IOPS, 43.79 MiB/s 00:22:55.395 Latency(us) 00:22:55.395 [2024-11-20T13:42:07.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.395 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:22:55.395 nvme0n1 : 1.01 11265.36 44.01 0.00 0.00 11328.48 4796.04 17039.36 00:22:55.395 [2024-11-20T13:42:07.352Z] =================================================================================================================== 00:22:55.395 [2024-11-20T13:42:07.352Z] Total : 11265.36 44.01 0.00 0.00 11328.48 4796.04 17039.36 00:22:55.395 { 00:22:55.395 "results": [ 00:22:55.395 { 00:22:55.395 "job": "nvme0n1", 00:22:55.395 "core_mask": "0x2", 00:22:55.395 "workload": "randrw", 00:22:55.395 "percentage": 50, 00:22:55.395 "status": "finished", 00:22:55.395 "queue_depth": 128, 00:22:55.395 "io_size": 4096, 00:22:55.395 "runtime": 1.006626, 00:22:55.395 "iops": 11265.355752782067, 00:22:55.395 "mibps": 44.00529590930495, 00:22:55.395 "io_failed": 0, 00:22:55.395 "io_timeout": 0, 00:22:55.395 "avg_latency_us": 11328.478671156003, 00:22:55.395 "min_latency_us": 4796.043636363636, 00:22:55.395 "max_latency_us": 17039.36 00:22:55.395 } 00:22:55.395 ], 00:22:55.395 "core_count": 1 00:22:55.395 } 00:22:55.395 13:42:07 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:55.395 13:42:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:55.653 13:42:07 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:22:55.653 13:42:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:55.653 13:42:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:55.653 13:42:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:55.653 13:42:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:55.653 13:42:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:55.912 13:42:07 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:22:56.170 13:42:07 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:22:56.170 13:42:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:56.170 13:42:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:56.170 13:42:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:56.170 13:42:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:56.170 13:42:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:56.429 13:42:08 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:22:56.429 13:42:08 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:56.429 13:42:08 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:56.429 13:42:08 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:56.429 13:42:08 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:56.429 13:42:08 keyring_file -- common/autotest_common.sh@644 
-- # case "$(type -t "$arg")" in 00:22:56.429 13:42:08 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:56.429 13:42:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.429 13:42:08 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:56.429 13:42:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:56.688 [2024-11-20 13:42:08.514948] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:56.688 [2024-11-20 13:42:08.515365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7ac60 (107): Transport endpoint is not connected 00:22:56.688 [2024-11-20 13:42:08.516341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7ac60 (9): Bad file descriptor 00:22:56.688 [2024-11-20 13:42:08.517354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:22:56.688 [2024-11-20 13:42:08.517383] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:56.688 [2024-11-20 13:42:08.517394] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:56.688 [2024-11-20 13:42:08.517405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:22:56.688 request: 00:22:56.688 { 00:22:56.688 "name": "nvme0", 00:22:56.688 "trtype": "tcp", 00:22:56.688 "traddr": "127.0.0.1", 00:22:56.688 "adrfam": "ipv4", 00:22:56.689 "trsvcid": "4420", 00:22:56.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:56.689 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:56.689 "prchk_reftag": false, 00:22:56.689 "prchk_guard": false, 00:22:56.689 "hdgst": false, 00:22:56.689 "ddgst": false, 00:22:56.689 "psk": "key1", 00:22:56.689 "allow_unrecognized_csi": false, 00:22:56.689 "method": "bdev_nvme_attach_controller", 00:22:56.689 "req_id": 1 00:22:56.689 } 00:22:56.689 Got JSON-RPC error response 00:22:56.689 response: 00:22:56.689 { 00:22:56.689 "code": -5, 00:22:56.689 "message": "Input/output error" 00:22:56.689 } 00:22:56.689 13:42:08 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:56.689 13:42:08 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:56.689 13:42:08 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:56.689 13:42:08 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:56.689 13:42:08 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:22:56.689 13:42:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:56.689 13:42:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:56.689 13:42:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:56.689 13:42:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:56.689 13:42:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:56.947 13:42:08 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:22:56.947 13:42:08 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:22:56.947 13:42:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:56.947 13:42:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:56.947 13:42:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:56.947 13:42:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:56.947 13:42:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:57.207 13:42:09 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:22:57.207 13:42:09 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:22:57.207 13:42:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:57.464 13:42:09 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:22:57.464 13:42:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:22:58.032 13:42:09 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:22:58.032 13:42:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:58.032 13:42:09 keyring_file -- keyring/file.sh@78 -- # jq length 00:22:58.298 13:42:10 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:22:58.299 13:42:10 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.OHaasIQdrn 00:22:58.299 13:42:10 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.OHaasIQdrn 00:22:58.299 13:42:10 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:22:58.299 13:42:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.OHaasIQdrn 00:22:58.299 13:42:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:58.299 13:42:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.299 13:42:10 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:58.299 13:42:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.299 13:42:10 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OHaasIQdrn 00:22:58.299 13:42:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OHaasIQdrn 00:22:58.299 [2024-11-20 13:42:10.249712] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.OHaasIQdrn': 0100660 00:22:58.299 [2024-11-20 13:42:10.249801] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:58.299 request: 00:22:58.299 { 00:22:58.299 "name": "key0", 00:22:58.299 "path": "/tmp/tmp.OHaasIQdrn", 00:22:58.299 "method": "keyring_file_add_key", 00:22:58.299 "req_id": 1 00:22:58.299 } 00:22:58.299 Got JSON-RPC error response 00:22:58.299 response: 00:22:58.299 { 00:22:58.299 "code": -1, 00:22:58.299 "message": "Operation not permitted" 00:22:58.299 } 00:22:58.566 13:42:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:58.566 13:42:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:58.566 13:42:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:58.566 13:42:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:58.566 13:42:10 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.OHaasIQdrn 00:22:58.566 13:42:10 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OHaasIQdrn 00:22:58.566 13:42:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OHaasIQdrn 00:22:58.824 13:42:10 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.OHaasIQdrn 00:22:58.824 13:42:10 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:22:58.824 13:42:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:58.824 13:42:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:58.824 13:42:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:58.824 13:42:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:58.824 13:42:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:59.083 13:42:10 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:22:59.083 13:42:10 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:59.083 13:42:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:59.083 13:42:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:59.083 13:42:10 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:59.083 13:42:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.083 13:42:10 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:59.083 13:42:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.083 13:42:10 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:59.083 13:42:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:59.342 [2024-11-20 13:42:11.146025] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.OHaasIQdrn': No such file or directory 00:22:59.342 [2024-11-20 13:42:11.146440] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:22:59.342 [2024-11-20 13:42:11.146468] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:22:59.342 [2024-11-20 13:42:11.146480] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:22:59.342 [2024-11-20 13:42:11.146490] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:59.342 [2024-11-20 13:42:11.146500] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:22:59.342 request: 00:22:59.342 { 00:22:59.342 "name": "nvme0", 00:22:59.342 "trtype": "tcp", 00:22:59.342 "traddr": "127.0.0.1", 00:22:59.342 "adrfam": "ipv4", 00:22:59.342 "trsvcid": "4420", 00:22:59.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:59.342 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:59.342 "prchk_reftag": false, 00:22:59.342 "prchk_guard": false, 00:22:59.342 "hdgst": false, 00:22:59.343 "ddgst": false, 00:22:59.343 "psk": "key0", 00:22:59.343 "allow_unrecognized_csi": false, 00:22:59.343 "method": "bdev_nvme_attach_controller", 00:22:59.343 "req_id": 1 00:22:59.343 } 00:22:59.343 Got JSON-RPC error response 00:22:59.343 response: 00:22:59.343 { 00:22:59.343 "code": -19, 00:22:59.343 "message": "No such device" 00:22:59.343 } 00:22:59.343 13:42:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:59.343 13:42:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:59.343 13:42:11 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:59.343 13:42:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:59.343 13:42:11 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:22:59.343 13:42:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:59.620 13:42:11 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:59.620 13:42:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:59.620 13:42:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:59.620 13:42:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:59.620 
13:42:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:59.620 13:42:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:59.620 13:42:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nyAVuA9egZ 00:22:59.620 13:42:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:59.620 13:42:11 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:59.620 13:42:11 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:59.620 13:42:11 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:59.620 13:42:11 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:59.620 13:42:11 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:59.620 13:42:11 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:59.620 13:42:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nyAVuA9egZ 00:22:59.620 13:42:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nyAVuA9egZ 00:22:59.620 13:42:11 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.nyAVuA9egZ 00:22:59.620 13:42:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nyAVuA9egZ 00:22:59.620 13:42:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nyAVuA9egZ 00:22:59.879 13:42:11 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:59.879 13:42:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:00.446 nvme0n1 00:23:00.446 13:42:12 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:23:00.446 13:42:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:00.446 13:42:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:00.446 13:42:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:00.446 13:42:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:00.446 13:42:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:00.705 13:42:12 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:23:00.705 13:42:12 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:23:00.705 13:42:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:00.965 13:42:12 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:23:00.965 13:42:12 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:23:00.965 13:42:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:00.965 13:42:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:00.965 13:42:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:01.224 13:42:12 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:23:01.224 13:42:13 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:23:01.224 13:42:13 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:23:01.224 13:42:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:01.224 13:42:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:01.224 13:42:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:01.224 13:42:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:01.482 13:42:13 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:23:01.482 13:42:13 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:01.482 13:42:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:01.740 13:42:13 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:23:01.740 13:42:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:01.740 13:42:13 keyring_file -- keyring/file.sh@105 -- # jq length 00:23:01.998 13:42:13 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:23:01.998 13:42:13 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nyAVuA9egZ 00:23:01.998 13:42:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nyAVuA9egZ 00:23:02.256 13:42:14 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0kiXn2EaZ5 00:23:02.256 13:42:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0kiXn2EaZ5 00:23:02.513 13:42:14 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:02.513 13:42:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:02.772 nvme0n1 00:23:03.030 13:42:14 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:23:03.030 13:42:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:03.295 13:42:15 keyring_file -- keyring/file.sh@113 -- # config='{ 00:23:03.295 "subsystems": [ 00:23:03.295 { 00:23:03.295 "subsystem": "keyring", 00:23:03.295 "config": [ 00:23:03.295 { 00:23:03.295 "method": "keyring_file_add_key", 00:23:03.295 "params": { 00:23:03.295 "name": "key0", 00:23:03.295 "path": "/tmp/tmp.nyAVuA9egZ" 00:23:03.295 } 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "method": "keyring_file_add_key", 00:23:03.295 "params": { 00:23:03.295 "name": "key1", 00:23:03.295 "path": "/tmp/tmp.0kiXn2EaZ5" 00:23:03.295 } 00:23:03.295 } 00:23:03.295 ] 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "subsystem": "iobuf", 00:23:03.295 "config": [ 00:23:03.295 { 00:23:03.295 "method": "iobuf_set_options", 00:23:03.295 "params": { 00:23:03.295 "small_pool_count": 8192, 00:23:03.295 "large_pool_count": 1024, 00:23:03.295 "small_bufsize": 8192, 00:23:03.295 "large_bufsize": 135168, 00:23:03.295 "enable_numa": false 00:23:03.295 } 00:23:03.295 } 00:23:03.295 ] 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "subsystem": 
"sock", 00:23:03.295 "config": [ 00:23:03.295 { 00:23:03.295 "method": "sock_set_default_impl", 00:23:03.295 "params": { 00:23:03.295 "impl_name": "uring" 00:23:03.295 } 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "method": "sock_impl_set_options", 00:23:03.295 "params": { 00:23:03.295 "impl_name": "ssl", 00:23:03.295 "recv_buf_size": 4096, 00:23:03.295 "send_buf_size": 4096, 00:23:03.295 "enable_recv_pipe": true, 00:23:03.295 "enable_quickack": false, 00:23:03.295 "enable_placement_id": 0, 00:23:03.295 "enable_zerocopy_send_server": true, 00:23:03.295 "enable_zerocopy_send_client": false, 00:23:03.295 "zerocopy_threshold": 0, 00:23:03.295 "tls_version": 0, 00:23:03.295 "enable_ktls": false 00:23:03.295 } 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "method": "sock_impl_set_options", 00:23:03.295 "params": { 00:23:03.295 "impl_name": "posix", 00:23:03.295 "recv_buf_size": 2097152, 00:23:03.295 "send_buf_size": 2097152, 00:23:03.295 "enable_recv_pipe": true, 00:23:03.295 "enable_quickack": false, 00:23:03.295 "enable_placement_id": 0, 00:23:03.295 "enable_zerocopy_send_server": true, 00:23:03.295 "enable_zerocopy_send_client": false, 00:23:03.295 "zerocopy_threshold": 0, 00:23:03.295 "tls_version": 0, 00:23:03.295 "enable_ktls": false 00:23:03.295 } 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "method": "sock_impl_set_options", 00:23:03.295 "params": { 00:23:03.295 "impl_name": "uring", 00:23:03.295 "recv_buf_size": 2097152, 00:23:03.295 "send_buf_size": 2097152, 00:23:03.295 "enable_recv_pipe": true, 00:23:03.295 "enable_quickack": false, 00:23:03.295 "enable_placement_id": 0, 00:23:03.295 "enable_zerocopy_send_server": false, 00:23:03.295 "enable_zerocopy_send_client": false, 00:23:03.295 "zerocopy_threshold": 0, 00:23:03.295 "tls_version": 0, 00:23:03.295 "enable_ktls": false 00:23:03.295 } 00:23:03.295 } 00:23:03.295 ] 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "subsystem": "vmd", 00:23:03.295 "config": [] 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "subsystem": "accel", 00:23:03.295 "config": [ 00:23:03.295 { 00:23:03.295 "method": "accel_set_options", 00:23:03.295 "params": { 00:23:03.295 "small_cache_size": 128, 00:23:03.295 "large_cache_size": 16, 00:23:03.295 "task_count": 2048, 00:23:03.295 "sequence_count": 2048, 00:23:03.295 "buf_count": 2048 00:23:03.295 } 00:23:03.295 } 00:23:03.295 ] 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "subsystem": "bdev", 00:23:03.295 "config": [ 00:23:03.295 { 00:23:03.295 "method": "bdev_set_options", 00:23:03.295 "params": { 00:23:03.295 "bdev_io_pool_size": 65535, 00:23:03.295 "bdev_io_cache_size": 256, 00:23:03.295 "bdev_auto_examine": true, 00:23:03.295 "iobuf_small_cache_size": 128, 00:23:03.295 "iobuf_large_cache_size": 16 00:23:03.295 } 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "method": "bdev_raid_set_options", 00:23:03.295 "params": { 00:23:03.295 "process_window_size_kb": 1024, 00:23:03.295 "process_max_bandwidth_mb_sec": 0 00:23:03.295 } 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "method": "bdev_iscsi_set_options", 00:23:03.295 "params": { 00:23:03.295 "timeout_sec": 30 00:23:03.295 } 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "method": "bdev_nvme_set_options", 00:23:03.295 "params": { 00:23:03.295 "action_on_timeout": "none", 00:23:03.295 "timeout_us": 0, 00:23:03.295 "timeout_admin_us": 0, 00:23:03.295 "keep_alive_timeout_ms": 10000, 00:23:03.295 "arbitration_burst": 0, 00:23:03.295 "low_priority_weight": 0, 00:23:03.295 "medium_priority_weight": 0, 00:23:03.295 "high_priority_weight": 0, 00:23:03.295 "nvme_adminq_poll_period_us": 
10000, 00:23:03.295 "nvme_ioq_poll_period_us": 0, 00:23:03.295 "io_queue_requests": 512, 00:23:03.295 "delay_cmd_submit": true, 00:23:03.295 "transport_retry_count": 4, 00:23:03.295 "bdev_retry_count": 3, 00:23:03.295 "transport_ack_timeout": 0, 00:23:03.295 "ctrlr_loss_timeout_sec": 0, 00:23:03.295 "reconnect_delay_sec": 0, 00:23:03.295 "fast_io_fail_timeout_sec": 0, 00:23:03.295 "disable_auto_failback": false, 00:23:03.295 "generate_uuids": false, 00:23:03.295 "transport_tos": 0, 00:23:03.295 "nvme_error_stat": false, 00:23:03.295 "rdma_srq_size": 0, 00:23:03.295 "io_path_stat": false, 00:23:03.295 "allow_accel_sequence": false, 00:23:03.295 "rdma_max_cq_size": 0, 00:23:03.295 "rdma_cm_event_timeout_ms": 0, 00:23:03.295 "dhchap_digests": [ 00:23:03.295 "sha256", 00:23:03.295 "sha384", 00:23:03.295 "sha512" 00:23:03.295 ], 00:23:03.295 "dhchap_dhgroups": [ 00:23:03.295 "null", 00:23:03.295 "ffdhe2048", 00:23:03.295 "ffdhe3072", 00:23:03.295 "ffdhe4096", 00:23:03.295 "ffdhe6144", 00:23:03.295 "ffdhe8192" 00:23:03.295 ] 00:23:03.295 } 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "method": "bdev_nvme_attach_controller", 00:23:03.295 "params": { 00:23:03.295 "name": "nvme0", 00:23:03.295 "trtype": "TCP", 00:23:03.295 "adrfam": "IPv4", 00:23:03.295 "traddr": "127.0.0.1", 00:23:03.295 "trsvcid": "4420", 00:23:03.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:03.295 "prchk_reftag": false, 00:23:03.295 "prchk_guard": false, 00:23:03.295 "ctrlr_loss_timeout_sec": 0, 00:23:03.295 "reconnect_delay_sec": 0, 00:23:03.295 "fast_io_fail_timeout_sec": 0, 00:23:03.295 "psk": "key0", 00:23:03.295 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:03.295 "hdgst": false, 00:23:03.295 "ddgst": false, 00:23:03.295 "multipath": "multipath" 00:23:03.295 } 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "method": "bdev_nvme_set_hotplug", 00:23:03.295 "params": { 00:23:03.295 "period_us": 100000, 00:23:03.295 "enable": false 00:23:03.295 } 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "method": "bdev_wait_for_examine" 00:23:03.295 } 00:23:03.295 ] 00:23:03.295 }, 00:23:03.295 { 00:23:03.295 "subsystem": "nbd", 00:23:03.295 "config": [] 00:23:03.295 } 00:23:03.295 ] 00:23:03.295 }' 00:23:03.295 13:42:15 keyring_file -- keyring/file.sh@115 -- # killprocess 85831 00:23:03.295 13:42:15 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85831 ']' 00:23:03.295 13:42:15 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85831 00:23:03.295 13:42:15 keyring_file -- common/autotest_common.sh@959 -- # uname 00:23:03.296 13:42:15 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.296 13:42:15 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85831 00:23:03.296 killing process with pid 85831 00:23:03.296 Received shutdown signal, test time was about 1.000000 seconds 00:23:03.296 00:23:03.296 Latency(us) 00:23:03.296 [2024-11-20T13:42:15.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.296 [2024-11-20T13:42:15.253Z] =================================================================================================================== 00:23:03.296 [2024-11-20T13:42:15.253Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.296 13:42:15 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:03.296 13:42:15 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:03.296 13:42:15 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85831' 00:23:03.296 
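The save_config dump above is then used to bring up a second bdevperf instance purely from configuration: the JSON (the keyring_file_add_key entries plus the bdev_nvme_attach_controller with "psk": "key0") is fed back on a file descriptor, which is why the relaunch below shows -c /dev/fd/63. A rough equivalent of that pattern — the config file name here is illustrative, and the real run passes the config via process substitution rather than a named file:

    # capture the live configuration, including the keyring and the PSK-protected controller
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config > /tmp/bperf_config.json
    # start a fresh bdevperf that rebuilds the same state from the config alone;
    # <(...) is what appears as /dev/fd/63 in the trace below
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(cat /tmp/bperf_config.json)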
13:42:15 keyring_file -- common/autotest_common.sh@973 -- # kill 85831 00:23:03.296 13:42:15 keyring_file -- common/autotest_common.sh@978 -- # wait 85831 00:23:03.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:03.555 13:42:15 keyring_file -- keyring/file.sh@118 -- # bperfpid=86085 00:23:03.555 13:42:15 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:03.555 13:42:15 keyring_file -- keyring/file.sh@120 -- # waitforlisten 86085 /var/tmp/bperf.sock 00:23:03.555 13:42:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 86085 ']' 00:23:03.555 13:42:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:03.555 13:42:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.555 13:42:15 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:23:03.555 "subsystems": [ 00:23:03.555 { 00:23:03.555 "subsystem": "keyring", 00:23:03.555 "config": [ 00:23:03.555 { 00:23:03.555 "method": "keyring_file_add_key", 00:23:03.555 "params": { 00:23:03.555 "name": "key0", 00:23:03.555 "path": "/tmp/tmp.nyAVuA9egZ" 00:23:03.555 } 00:23:03.555 }, 00:23:03.555 { 00:23:03.555 "method": "keyring_file_add_key", 00:23:03.555 "params": { 00:23:03.555 "name": "key1", 00:23:03.555 "path": "/tmp/tmp.0kiXn2EaZ5" 00:23:03.555 } 00:23:03.555 } 00:23:03.555 ] 00:23:03.555 }, 00:23:03.555 { 00:23:03.555 "subsystem": "iobuf", 00:23:03.555 "config": [ 00:23:03.555 { 00:23:03.555 "method": "iobuf_set_options", 00:23:03.555 "params": { 00:23:03.555 "small_pool_count": 8192, 00:23:03.555 "large_pool_count": 1024, 00:23:03.555 "small_bufsize": 8192, 00:23:03.555 "large_bufsize": 135168, 00:23:03.555 "enable_numa": false 00:23:03.555 } 00:23:03.555 } 00:23:03.555 ] 00:23:03.555 }, 00:23:03.555 { 00:23:03.555 "subsystem": "sock", 00:23:03.555 "config": [ 00:23:03.555 { 00:23:03.555 "method": "sock_set_default_impl", 00:23:03.555 "params": { 00:23:03.555 "impl_name": "uring" 00:23:03.555 } 00:23:03.555 }, 00:23:03.555 { 00:23:03.555 "method": "sock_impl_set_options", 00:23:03.555 "params": { 00:23:03.555 "impl_name": "ssl", 00:23:03.555 "recv_buf_size": 4096, 00:23:03.555 "send_buf_size": 4096, 00:23:03.555 "enable_recv_pipe": true, 00:23:03.555 "enable_quickack": false, 00:23:03.555 "enable_placement_id": 0, 00:23:03.555 "enable_zerocopy_send_server": true, 00:23:03.555 "enable_zerocopy_send_client": false, 00:23:03.555 "zerocopy_threshold": 0, 00:23:03.555 "tls_version": 0, 00:23:03.555 "enable_ktls": false 00:23:03.555 } 00:23:03.555 }, 00:23:03.555 { 00:23:03.555 "method": "sock_impl_set_options", 00:23:03.555 "params": { 00:23:03.555 "impl_name": "posix", 00:23:03.555 "recv_buf_size": 2097152, 00:23:03.555 "send_buf_size": 2097152, 00:23:03.555 "enable_recv_pipe": true, 00:23:03.555 "enable_quickack": false, 00:23:03.555 "enable_placement_id": 0, 00:23:03.555 "enable_zerocopy_send_server": true, 00:23:03.555 "enable_zerocopy_send_client": false, 00:23:03.555 "zerocopy_threshold": 0, 00:23:03.555 "tls_version": 0, 00:23:03.555 "enable_ktls": false 00:23:03.555 } 00:23:03.556 }, 00:23:03.556 { 00:23:03.556 "method": "sock_impl_set_options", 00:23:03.556 "params": { 00:23:03.556 "impl_name": "uring", 00:23:03.556 "recv_buf_size": 2097152, 00:23:03.556 "send_buf_size": 2097152, 00:23:03.556 "enable_recv_pipe": true, 00:23:03.556 "enable_quickack": false, 00:23:03.556 
"enable_placement_id": 0, 00:23:03.556 "enable_zerocopy_send_server": false, 00:23:03.556 "enable_zerocopy_send_client": false, 00:23:03.556 "zerocopy_threshold": 0, 00:23:03.556 "tls_version": 0, 00:23:03.556 "enable_ktls": false 00:23:03.556 } 00:23:03.556 } 00:23:03.556 ] 00:23:03.556 }, 00:23:03.556 { 00:23:03.556 "subsystem": "vmd", 00:23:03.556 "config": [] 00:23:03.556 }, 00:23:03.556 { 00:23:03.556 "subsystem": "accel", 00:23:03.556 "config": [ 00:23:03.556 { 00:23:03.556 "method": "accel_set_options", 00:23:03.556 "params": { 00:23:03.556 "small_cache_size": 128, 00:23:03.556 "large_cache_size": 16, 00:23:03.556 "task_count": 2048, 00:23:03.556 "sequence_count": 2048, 00:23:03.556 "buf_count": 2048 00:23:03.556 } 00:23:03.556 } 00:23:03.556 ] 00:23:03.556 }, 00:23:03.556 { 00:23:03.556 "subsystem": "bdev", 00:23:03.556 "config": [ 00:23:03.556 { 00:23:03.556 "method": "bdev_set_options", 00:23:03.556 "params": { 00:23:03.556 "bdev_io_pool_size": 65535, 00:23:03.556 "bdev_io_cache_size": 256, 00:23:03.556 "bdev_auto_examine": true, 00:23:03.556 "iobuf_small_cache_size": 128, 00:23:03.556 "iobuf_large_cache_size": 16 00:23:03.556 } 00:23:03.556 }, 00:23:03.556 { 00:23:03.556 "method": "bdev_raid_set_options", 00:23:03.556 "params": { 00:23:03.556 "process_window_size_kb": 1024, 00:23:03.556 "process_max_bandwidth_mb_sec": 0 00:23:03.556 } 00:23:03.556 }, 00:23:03.556 { 00:23:03.556 "method": "bdev_iscsi_set_options", 00:23:03.556 "params": { 00:23:03.556 "timeout_sec": 30 00:23:03.556 } 00:23:03.556 }, 00:23:03.556 { 00:23:03.556 "method": "bdev_nvme_set_options", 00:23:03.556 "params": { 00:23:03.556 "action_on_timeout": "none", 00:23:03.556 "timeout_us": 0, 00:23:03.556 "timeout_admin_us": 0, 00:23:03.556 "keep_alive_timeout_ms": 10000, 00:23:03.556 "arbitration_burst": 0, 00:23:03.556 "low_priority_weight": 0, 00:23:03.556 "medium_priority_weight": 0, 00:23:03.556 "high_priority_weight": 0, 00:23:03.556 "nvme_adminq_poll_period_us": 10000, 00:23:03.556 "nvme_ioq_poll_period_us": 0, 00:23:03.556 "io_queue_requests": 512, 00:23:03.556 "delay_cmd_submit": true, 00:23:03.556 "transport_retry_count": 4, 00:23:03.556 "bdev_retry_count": 3, 00:23:03.556 "transport_ack_timeout": 0, 00:23:03.556 "ctrlr_loss_timeout_sec": 0, 00:23:03.556 "reconnect_delay_sec": 0, 00:23:03.556 "fast_io_fail_timeout_sec": 0, 00:23:03.556 "disable_auto_failback": false, 00:23:03.556 "generate_uuids": false, 00:23:03.556 "transport_tos": 0, 00:23:03.556 "nvme_error_stat": false, 00:23:03.556 "rdma_srq_size": 0, 00:23:03.556 "io_path_stat": false, 00:23:03.556 "allow_accel_sequence": false, 00:23:03.556 "rdma_max_cq_size": 0, 00:23:03.556 "rdma_cm_event_timeout_ms": 0, 00:23:03.556 "dhchap_digests": [ 00:23:03.556 "sha256", 00:23:03.556 "sha384", 00:23:03.556 "sha512" 00:23:03.556 ], 00:23:03.556 "dhchap_dhgroups": [ 00:23:03.556 "null", 00:23:03.556 "ffdhe2048", 00:23:03.556 "ffdhe3072", 00:23:03.556 "ffdhe4096", 00:23:03.556 "ffdhe6144", 00:23:03.556 "ffdhe8192" 00:23:03.556 ] 00:23:03.556 } 00:23:03.556 }, 00:23:03.556 { 00:23:03.556 "method": "bdev_nvme_attach_controller", 00:23:03.556 "params": { 00:23:03.556 "name": "nvme0", 00:23:03.556 "trtype": "TCP", 00:23:03.556 "adrfam": "IPv4", 00:23:03.556 "traddr": "127.0.0.1", 00:23:03.556 "trsvcid": "4420", 00:23:03.556 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:03.556 "prchk_reftag": false, 00:23:03.556 "prchk_guard": false, 00:23:03.556 "ctrlr_loss_timeout_sec": 0, 00:23:03.556 "reconnect_delay_sec": 0, 00:23:03.556 "fast_io_fail_timeout_sec": 0, 
00:23:03.556 "psk": "key0", 00:23:03.556 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:03.556 "hdgst": false, 00:23:03.556 "ddgst": false, 00:23:03.556 "multipath": "multipath" 00:23:03.556 } 00:23:03.556 }, 00:23:03.556 { 00:23:03.556 "method": "bdev_nvme_set_hotplug", 00:23:03.556 "params": { 00:23:03.556 "period_us": 100000, 00:23:03.556 "enable": false 00:23:03.556 } 00:23:03.556 }, 00:23:03.556 { 00:23:03.556 "method": "bdev_wait_for_examine" 00:23:03.556 } 00:23:03.556 ] 00:23:03.556 }, 00:23:03.556 { 00:23:03.556 "subsystem": "nbd", 00:23:03.556 "config": [] 00:23:03.556 } 00:23:03.556 ] 00:23:03.556 }' 00:23:03.556 13:42:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:03.556 13:42:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.556 13:42:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:03.556 [2024-11-20 13:42:15.392594] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:23:03.556 [2024-11-20 13:42:15.392874] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86085 ] 00:23:03.815 [2024-11-20 13:42:15.536027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.815 [2024-11-20 13:42:15.600101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.815 [2024-11-20 13:42:15.736837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:04.077 [2024-11-20 13:42:15.800265] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.646 13:42:16 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.646 13:42:16 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:23:04.646 13:42:16 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:23:04.646 13:42:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:04.646 13:42:16 keyring_file -- keyring/file.sh@121 -- # jq length 00:23:04.904 13:42:16 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:04.904 13:42:16 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:23:04.904 13:42:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:04.904 13:42:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:04.904 13:42:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:04.904 13:42:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:04.904 13:42:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:05.162 13:42:17 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:23:05.162 13:42:17 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:23:05.162 13:42:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:05.162 13:42:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:05.162 13:42:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:05.162 13:42:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:05.163 13:42:17 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:05.421 13:42:17 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:23:05.421 13:42:17 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:23:05.421 13:42:17 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:23:05.421 13:42:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:05.680 13:42:17 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:23:05.680 13:42:17 keyring_file -- keyring/file.sh@1 -- # cleanup 00:23:05.680 13:42:17 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.nyAVuA9egZ /tmp/tmp.0kiXn2EaZ5 00:23:05.939 13:42:17 keyring_file -- keyring/file.sh@20 -- # killprocess 86085 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 86085 ']' 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 86085 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86085 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:05.939 killing process with pid 86085 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86085' 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@973 -- # kill 86085 00:23:05.939 Received shutdown signal, test time was about 1.000000 seconds 00:23:05.939 00:23:05.939 Latency(us) 00:23:05.939 [2024-11-20T13:42:17.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.939 [2024-11-20T13:42:17.896Z] =================================================================================================================== 00:23:05.939 [2024-11-20T13:42:17.896Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@978 -- # wait 86085 00:23:05.939 13:42:17 keyring_file -- keyring/file.sh@21 -- # killprocess 85821 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85821 ']' 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85821 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.939 13:42:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85821 00:23:06.198 13:42:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:06.198 killing process with pid 85821 00:23:06.198 13:42:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:06.198 13:42:17 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85821' 00:23:06.198 13:42:17 keyring_file -- common/autotest_common.sh@973 -- # kill 85821 00:23:06.198 13:42:17 keyring_file -- common/autotest_common.sh@978 -- # wait 85821 00:23:06.456 00:23:06.456 real 0m16.528s 00:23:06.456 user 0m42.100s 00:23:06.456 sys 0m3.227s 00:23:06.456 13:42:18 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.456 
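One behaviour worth noting from the run above: keyring_file rejects key files whose permissions allow group or other access, which is what the "Invalid permissions for key file '/tmp/tmp.OHaasIQdrn': 0100660" error exercised. Condensed from the trace, with the same paths and socket:

    chmod 0660 /tmp/tmp.OHaasIQdrn
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OHaasIQdrn
    # -> rejected with "Operation not permitted" while the file is group-accessible
    chmod 0600 /tmp/tmp.OHaasIQdrn
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OHaasIQdrn
    # -> accepted; deleting the file afterwards makes a later attach fail with "No such device",
    #    because the key material is read back from disk when the TLS credentials are generated
    rm -f /tmp/tmp.OHaasIQdrn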
************************************ 00:23:06.456 END TEST keyring_file 00:23:06.456 ************************************ 00:23:06.456 13:42:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:06.456 13:42:18 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:23:06.456 13:42:18 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:06.456 13:42:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:06.456 13:42:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.456 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:23:06.456 ************************************ 00:23:06.456 START TEST keyring_linux 00:23:06.456 ************************************ 00:23:06.456 13:42:18 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:06.456 Joined session keyring: 293545420 00:23:06.771 * Looking for test storage... 00:23:06.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:06.771 13:42:18 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:06.771 13:42:18 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:23:06.771 13:42:18 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:06.771 13:42:18 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@345 -- # : 1 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@368 -- # return 0 00:23:06.771 13:42:18 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.771 13:42:18 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:06.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.771 --rc genhtml_branch_coverage=1 00:23:06.771 --rc genhtml_function_coverage=1 00:23:06.771 --rc genhtml_legend=1 00:23:06.771 --rc geninfo_all_blocks=1 00:23:06.771 --rc geninfo_unexecuted_blocks=1 00:23:06.771 00:23:06.771 ' 00:23:06.771 13:42:18 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:06.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.771 --rc genhtml_branch_coverage=1 00:23:06.771 --rc genhtml_function_coverage=1 00:23:06.771 --rc genhtml_legend=1 00:23:06.771 --rc geninfo_all_blocks=1 00:23:06.771 --rc geninfo_unexecuted_blocks=1 00:23:06.771 00:23:06.771 ' 00:23:06.771 13:42:18 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:06.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.771 --rc genhtml_branch_coverage=1 00:23:06.771 --rc genhtml_function_coverage=1 00:23:06.771 --rc genhtml_legend=1 00:23:06.771 --rc geninfo_all_blocks=1 00:23:06.771 --rc geninfo_unexecuted_blocks=1 00:23:06.771 00:23:06.771 ' 00:23:06.771 13:42:18 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:06.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.771 --rc genhtml_branch_coverage=1 00:23:06.771 --rc genhtml_function_coverage=1 00:23:06.771 --rc genhtml_legend=1 00:23:06.771 --rc geninfo_all_blocks=1 00:23:06.771 --rc geninfo_unexecuted_blocks=1 00:23:06.771 00:23:06.771 ' 00:23:06.771 13:42:18 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:06.771 13:42:18 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.771 13:42:18 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ff08136-65da-4f4c-b769-a07096c587b5 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=8ff08136-65da-4f4c-b769-a07096c587b5 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.771 13:42:18 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.771 13:42:18 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.772 13:42:18 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.772 13:42:18 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.772 13:42:18 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.772 13:42:18 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.772 13:42:18 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.772 13:42:18 keyring_linux -- paths/export.sh@5 -- # export PATH 00:23:06.772 13:42:18 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:06.772 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:06.772 13:42:18 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:06.772 13:42:18 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:06.772 13:42:18 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:23:06.772 13:42:18 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:23:06.772 13:42:18 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:23:06.772 13:42:18 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:23:06.772 /tmp/:spdk-test:key0 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:23:06.772 13:42:18 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:23:06.772 13:42:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:23:06.772 /tmp/:spdk-test:key1 00:23:06.772 13:42:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:23:06.772 13:42:18 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=86212 00:23:06.772 13:42:18 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:06.772 13:42:18 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 86212 00:23:06.772 13:42:18 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86212 ']' 00:23:06.772 13:42:18 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.772 13:42:18 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.772 13:42:18 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.772 13:42:18 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.772 13:42:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:07.030 [2024-11-20 13:42:18.779929] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
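The two prep_key calls above take a raw hex secret plus a digest selector and wrap them into the NVMe TLS PSK interchange format before writing the result to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. A minimal sketch of that wrapping follows; it assumes the format is "NVMeTLSkey-1:<digest>:" plus base64 of the ASCII key with a CRC-32 appended, which matches the shape of the strings seen later in this log, but the CRC byte order and digest encoding are assumptions, not a verbatim copy of format_key in nvmf/common.sh.

format_interchange_psk() {
    # Hypothetical re-implementation for illustration only.
    local key=$1 digest=${2:-0}
    python3 - "$key" "$digest" <<'PY'
import base64, struct, sys, zlib

key = sys.argv[1].encode()                # configured PSK, kept as ASCII text
digest = int(sys.argv[2])                 # 0 = no hash, as used by this test
crc = struct.pack("<I", zlib.crc32(key))  # CRC-32 appended little-endian (assumed)
print("NVMeTLSkey-1:%02d:%s:" % (digest, base64.b64encode(key + crc).decode()))
PY
}

format_interchange_psk 00112233445566778899aabbccddeeff 0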
00:23:07.030 [2024-11-20 13:42:18.780068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86212 ] 00:23:07.030 [2024-11-20 13:42:18.931259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.289 [2024-11-20 13:42:19.010121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.289 [2024-11-20 13:42:19.093246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:07.546 13:42:19 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.546 13:42:19 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:23:07.546 13:42:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:23:07.546 13:42:19 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.546 13:42:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:07.546 [2024-11-20 13:42:19.312695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.546 null0 00:23:07.546 [2024-11-20 13:42:19.344651] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:07.546 [2024-11-20 13:42:19.344919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:07.546 13:42:19 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.546 13:42:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:23:07.546 714564781 00:23:07.546 13:42:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:23:07.546 905481182 00:23:07.546 13:42:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=86223 00:23:07.546 13:42:19 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:23:07.546 13:42:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 86223 /var/tmp/bperf.sock 00:23:07.546 13:42:19 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86223 ']' 00:23:07.546 13:42:19 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:07.546 13:42:19 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:07.546 13:42:19 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:07.546 13:42:19 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.546 13:42:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:07.546 [2024-11-20 13:42:19.432897] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
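The keyctl add calls above are the heart of the test: each formatted PSK is stored as a "user" key in the session keyring (@s), and the serial number keyctl prints (714564781 and 905481182 here) is what the SPDK keyring module is later expected to report back. The same operations in isolation, with names and payloads taken from the log:

key0_sn=$(keyctl add user ":spdk-test:key0" \
    "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
key1_sn=$(keyctl add user ":spdk-test:key1" \
    "NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:" @s)

keyctl search @s user ":spdk-test:key0"   # resolves the name back to its serial
keyctl print "$key0_sn"                   # dumps the interchange-format payload
keyctl show @s                            # lists everything linked into the session keyring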
00:23:07.546 [2024-11-20 13:42:19.433025] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86223 ] 00:23:07.805 [2024-11-20 13:42:19.581467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.805 [2024-11-20 13:42:19.652931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.805 13:42:19 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.805 13:42:19 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:23:07.805 13:42:19 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:23:07.805 13:42:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:23:08.371 13:42:20 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:23:08.371 13:42:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:08.630 [2024-11-20 13:42:20.433556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:08.630 13:42:20 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:08.630 13:42:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:08.889 [2024-11-20 13:42:20.799347] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.147 nvme0n1 00:23:09.147 13:42:20 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:23:09.147 13:42:20 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:23:09.147 13:42:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:09.147 13:42:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:09.147 13:42:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:09.147 13:42:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:09.406 13:42:21 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:23:09.406 13:42:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:09.406 13:42:21 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:23:09.406 13:42:21 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:09.406 13:42:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:09.406 13:42:21 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:23:09.406 13:42:21 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:23:09.665 13:42:21 keyring_linux -- keyring/linux.sh@25 -- # sn=714564781 00:23:09.665 13:42:21 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:23:09.665 13:42:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
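Once bdevperf is listening on /var/tmp/bperf.sock, the RPC sequence above enables the Linux keyring backend, finishes framework init, and attaches an NVMe-oF controller whose --psk argument is a keyring key name rather than a file path. Condensed into one place (these are the same commands visible in the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

"$rpc" -s "$sock" keyring_linux_set_options --enable
"$rpc" -s "$sock" framework_start_init
"$rpc" -s "$sock" bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0
"$rpc" -s "$sock" keyring_get_keys | jq length   # check_keys expects exactly 1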
00:23:09.665 13:42:21 keyring_linux -- keyring/linux.sh@26 -- # [[ 714564781 == \7\1\4\5\6\4\7\8\1 ]] 00:23:09.665 13:42:21 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 714564781 00:23:09.665 13:42:21 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:23:09.665 13:42:21 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:09.923 Running I/O for 1 seconds... 00:23:10.860 12157.00 IOPS, 47.49 MiB/s 00:23:10.860 Latency(us) 00:23:10.860 [2024-11-20T13:42:22.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.860 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:10.860 nvme0n1 : 1.01 12154.36 47.48 0.00 0.00 10469.46 5540.77 15847.80 00:23:10.860 [2024-11-20T13:42:22.817Z] =================================================================================================================== 00:23:10.860 [2024-11-20T13:42:22.817Z] Total : 12154.36 47.48 0.00 0.00 10469.46 5540.77 15847.80 00:23:10.860 { 00:23:10.860 "results": [ 00:23:10.860 { 00:23:10.860 "job": "nvme0n1", 00:23:10.860 "core_mask": "0x2", 00:23:10.860 "workload": "randread", 00:23:10.860 "status": "finished", 00:23:10.860 "queue_depth": 128, 00:23:10.860 "io_size": 4096, 00:23:10.860 "runtime": 1.010831, 00:23:10.860 "iops": 12154.356168340702, 00:23:10.860 "mibps": 47.47795378258087, 00:23:10.860 "io_failed": 0, 00:23:10.860 "io_timeout": 0, 00:23:10.860 "avg_latency_us": 10469.456740414069, 00:23:10.860 "min_latency_us": 5540.770909090909, 00:23:10.860 "max_latency_us": 15847.796363636364 00:23:10.860 } 00:23:10.860 ], 00:23:10.860 "core_count": 1 00:23:10.860 } 00:23:10.860 13:42:22 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:10.860 13:42:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:11.427 13:42:23 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:23:11.427 13:42:23 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:23:11.427 13:42:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:11.427 13:42:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:11.427 13:42:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:11.427 13:42:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:11.686 13:42:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:23:11.686 13:42:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:11.686 13:42:23 keyring_linux -- keyring/linux.sh@23 -- # return 00:23:11.686 13:42:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:11.686 13:42:23 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:23:11.686 13:42:23 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
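The check_keys verification that precedes the I/O run cross-checks SPDK's view of the key against the kernel's: the serial returned by the keyring_get_keys RPC has to equal what keyctl search finds under the same name, and keyctl print has to return the exact interchange-format payload. Roughly (the combined jq filter is my shorthand for the two jq passes in keyring/common.sh and keyring/linux.sh):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

sn_rpc=$("$rpc" -s "$sock" keyring_get_keys \
    | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
sn_kernel=$(keyctl search @s user ":spdk-test:key0")

[[ "$sn_rpc" == "$sn_kernel" ]]           # 714564781 on both sides in this run
keyctl print "$sn_kernel"                 # must match the formatted PSK exactly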
00:23:11.686 13:42:23 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:23:11.686 13:42:23 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:11.686 13:42:23 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:23:11.686 13:42:23 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:11.686 13:42:23 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:11.686 13:42:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:11.945 [2024-11-20 13:42:23.730811] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:11.945 [2024-11-20 13:42:23.731423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f75d0 (107): Transport endpoint is not connected 00:23:11.945 [2024-11-20 13:42:23.732408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f75d0 (9): Bad file descriptor 00:23:11.945 [2024-11-20 13:42:23.733403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:23:11.945 [2024-11-20 13:42:23.733569] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:11.945 [2024-11-20 13:42:23.733586] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:23:11.945 [2024-11-20 13:42:23.733598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:23:11.945 request: 00:23:11.945 { 00:23:11.945 "name": "nvme0", 00:23:11.945 "trtype": "tcp", 00:23:11.945 "traddr": "127.0.0.1", 00:23:11.945 "adrfam": "ipv4", 00:23:11.945 "trsvcid": "4420", 00:23:11.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:11.945 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:11.945 "prchk_reftag": false, 00:23:11.945 "prchk_guard": false, 00:23:11.945 "hdgst": false, 00:23:11.945 "ddgst": false, 00:23:11.945 "psk": ":spdk-test:key1", 00:23:11.945 "allow_unrecognized_csi": false, 00:23:11.945 "method": "bdev_nvme_attach_controller", 00:23:11.945 "req_id": 1 00:23:11.945 } 00:23:11.945 Got JSON-RPC error response 00:23:11.945 response: 00:23:11.945 { 00:23:11.945 "code": -5, 00:23:11.945 "message": "Input/output error" 00:23:11.945 } 00:23:11.945 13:42:23 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:23:11.945 13:42:23 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:11.945 13:42:23 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:11.945 13:42:23 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@33 -- # sn=714564781 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 714564781 00:23:11.945 1 links removed 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@33 -- # sn=905481182 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 905481182 00:23:11.945 1 links removed 00:23:11.945 13:42:23 keyring_linux -- keyring/linux.sh@41 -- # killprocess 86223 00:23:11.945 13:42:23 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86223 ']' 00:23:11.945 13:42:23 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86223 00:23:11.945 13:42:23 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:23:11.945 13:42:23 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.945 13:42:23 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86223 00:23:11.945 killing process with pid 86223 00:23:11.945 Received shutdown signal, test time was about 1.000000 seconds 00:23:11.945 00:23:11.945 Latency(us) 00:23:11.945 [2024-11-20T13:42:23.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.945 [2024-11-20T13:42:23.902Z] =================================================================================================================== 00:23:11.945 [2024-11-20T13:42:23.902Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:11.945 13:42:23 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:11.945 13:42:23 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:11.945 13:42:23 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86223' 00:23:11.945 13:42:23 keyring_linux -- common/autotest_common.sh@973 -- # kill 86223 00:23:11.945 13:42:23 keyring_linux -- common/autotest_common.sh@978 -- # wait 86223 00:23:12.204 13:42:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 86212 00:23:12.204 13:42:24 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86212 ']' 00:23:12.204 13:42:24 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86212 00:23:12.204 13:42:24 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:23:12.204 13:42:24 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.204 13:42:24 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86212 00:23:12.204 killing process with pid 86212 00:23:12.204 13:42:24 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:12.204 13:42:24 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:12.204 13:42:24 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86212' 00:23:12.204 13:42:24 keyring_linux -- common/autotest_common.sh@973 -- # kill 86212 00:23:12.204 13:42:24 keyring_linux -- common/autotest_common.sh@978 -- # wait 86212 00:23:12.771 ************************************ 00:23:12.771 END TEST keyring_linux 00:23:12.771 ************************************ 00:23:12.771 00:23:12.771 real 0m6.079s 00:23:12.771 user 0m12.350s 00:23:12.771 sys 0m1.664s 00:23:12.771 13:42:24 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.771 13:42:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:12.771 13:42:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:12.771 13:42:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:12.771 13:42:24 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:12.771 13:42:24 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:12.771 13:42:24 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:12.771 13:42:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:12.771 13:42:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:12.771 13:42:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:12.771 13:42:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:12.771 13:42:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:23:12.771 13:42:24 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:23:12.771 13:42:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:23:12.771 13:42:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:23:12.772 13:42:24 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:23:12.772 13:42:24 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:23:12.772 13:42:24 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:23:12.772 13:42:24 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:23:12.772 13:42:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.772 13:42:24 -- common/autotest_common.sh@10 -- # set +x 00:23:12.772 13:42:24 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:23:12.772 13:42:24 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:23:12.772 13:42:24 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:23:12.772 13:42:24 -- common/autotest_common.sh@10 -- # set +x 00:23:14.682 INFO: APP EXITING 00:23:14.682 INFO: killing all VMs 
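The last stretch of the keyring test above is the negative path plus cleanup: the attach using --psk :spdk-test:key1 is expected to fail (the NOT wrapper treats a successful attach as a test error), and the trap'd cleanup then resolves each key name back to its serial and unlinks it from the session keyring. A sketch of both steps; the error-checking shape is illustrative rather than a copy of the NOT helper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

if "$rpc" -s "$sock" bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1; then
    echo "attach with key1 unexpectedly succeeded" >&2
    exit 1
fi

for name in ":spdk-test:key0" ":spdk-test:key1"; do
    sn=$(keyctl search @s user "$name") && keyctl unlink "$sn"
done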
00:23:14.682 INFO: killing vhost app 00:23:14.682 INFO: EXIT DONE 00:23:15.271 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:15.271 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:15.271 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:16.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:16.207 Cleaning 00:23:16.207 Removing: /var/run/dpdk/spdk0/config 00:23:16.207 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:16.207 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:16.207 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:16.207 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:16.207 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:16.207 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:16.207 Removing: /var/run/dpdk/spdk1/config 00:23:16.207 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:16.207 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:16.207 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:16.207 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:16.207 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:16.207 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:16.207 Removing: /var/run/dpdk/spdk2/config 00:23:16.207 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:16.207 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:16.207 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:16.207 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:16.207 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:16.207 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:16.207 Removing: /var/run/dpdk/spdk3/config 00:23:16.207 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:16.207 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:16.207 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:16.207 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:16.207 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:16.207 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:16.207 Removing: /var/run/dpdk/spdk4/config 00:23:16.207 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:16.207 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:16.207 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:16.207 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:16.207 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:16.207 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:16.207 Removing: /dev/shm/nvmf_trace.0 00:23:16.207 Removing: /dev/shm/spdk_tgt_trace.pid56851 00:23:16.207 Removing: /var/run/dpdk/spdk0 00:23:16.208 Removing: /var/run/dpdk/spdk1 00:23:16.208 Removing: /var/run/dpdk/spdk2 00:23:16.208 Removing: /var/run/dpdk/spdk3 00:23:16.208 Removing: /var/run/dpdk/spdk4 00:23:16.208 Removing: /var/run/dpdk/spdk_pid56692 00:23:16.208 Removing: /var/run/dpdk/spdk_pid56851 00:23:16.208 Removing: /var/run/dpdk/spdk_pid57049 00:23:16.208 Removing: /var/run/dpdk/spdk_pid57130 00:23:16.208 Removing: /var/run/dpdk/spdk_pid57163 00:23:16.208 Removing: /var/run/dpdk/spdk_pid57273 00:23:16.208 Removing: /var/run/dpdk/spdk_pid57291 00:23:16.208 Removing: /var/run/dpdk/spdk_pid57430 00:23:16.208 Removing: /var/run/dpdk/spdk_pid57626 00:23:16.208 Removing: /var/run/dpdk/spdk_pid57780 00:23:16.208 Removing: /var/run/dpdk/spdk_pid57858 00:23:16.208 
Removing: /var/run/dpdk/spdk_pid57929 00:23:16.208 Removing: /var/run/dpdk/spdk_pid58026 00:23:16.208 Removing: /var/run/dpdk/spdk_pid58098 00:23:16.208 Removing: /var/run/dpdk/spdk_pid58136 00:23:16.208 Removing: /var/run/dpdk/spdk_pid58172 00:23:16.208 Removing: /var/run/dpdk/spdk_pid58236 00:23:16.208 Removing: /var/run/dpdk/spdk_pid58328 00:23:16.208 Removing: /var/run/dpdk/spdk_pid58772 00:23:16.208 Removing: /var/run/dpdk/spdk_pid58811 00:23:16.208 Removing: /var/run/dpdk/spdk_pid58862 00:23:16.208 Removing: /var/run/dpdk/spdk_pid58876 00:23:16.208 Removing: /var/run/dpdk/spdk_pid58943 00:23:16.208 Removing: /var/run/dpdk/spdk_pid58952 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59019 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59035 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59080 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59091 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59136 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59154 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59285 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59320 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59403 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59735 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59747 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59783 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59797 00:23:16.208 Removing: /var/run/dpdk/spdk_pid59818 00:23:16.466 Removing: /var/run/dpdk/spdk_pid59837 00:23:16.466 Removing: /var/run/dpdk/spdk_pid59856 00:23:16.466 Removing: /var/run/dpdk/spdk_pid59866 00:23:16.466 Removing: /var/run/dpdk/spdk_pid59890 00:23:16.466 Removing: /var/run/dpdk/spdk_pid59904 00:23:16.466 Removing: /var/run/dpdk/spdk_pid59925 00:23:16.466 Removing: /var/run/dpdk/spdk_pid59944 00:23:16.466 Removing: /var/run/dpdk/spdk_pid59963 00:23:16.466 Removing: /var/run/dpdk/spdk_pid59973 00:23:16.466 Removing: /var/run/dpdk/spdk_pid59992 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60011 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60032 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60051 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60059 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60080 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60116 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60126 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60161 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60233 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60262 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60271 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60305 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60309 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60323 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60360 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60379 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60408 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60417 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60427 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60438 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60453 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60457 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60474 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60484 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60512 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60539 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60548 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60582 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60592 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60599 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60640 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60651 00:23:16.466 Removing: 
/var/run/dpdk/spdk_pid60678 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60691 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60697 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60706 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60713 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60721 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60728 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60736 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60818 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60871 00:23:16.466 Removing: /var/run/dpdk/spdk_pid60989 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61028 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61073 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61093 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61104 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61124 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61161 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61182 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61260 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61276 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61320 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61389 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61450 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61475 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61582 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61623 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61657 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61888 00:23:16.466 Removing: /var/run/dpdk/spdk_pid61981 00:23:16.466 Removing: /var/run/dpdk/spdk_pid62015 00:23:16.466 Removing: /var/run/dpdk/spdk_pid62039 00:23:16.466 Removing: /var/run/dpdk/spdk_pid62078 00:23:16.466 Removing: /var/run/dpdk/spdk_pid62106 00:23:16.467 Removing: /var/run/dpdk/spdk_pid62145 00:23:16.467 Removing: /var/run/dpdk/spdk_pid62183 00:23:16.467 Removing: /var/run/dpdk/spdk_pid62570 00:23:16.467 Removing: /var/run/dpdk/spdk_pid62608 00:23:16.467 Removing: /var/run/dpdk/spdk_pid62959 00:23:16.467 Removing: /var/run/dpdk/spdk_pid63435 00:23:16.467 Removing: /var/run/dpdk/spdk_pid63706 00:23:16.467 Removing: /var/run/dpdk/spdk_pid64630 00:23:16.467 Removing: /var/run/dpdk/spdk_pid65600 00:23:16.467 Removing: /var/run/dpdk/spdk_pid65718 00:23:16.467 Removing: /var/run/dpdk/spdk_pid65785 00:23:16.467 Removing: /var/run/dpdk/spdk_pid67204 00:23:16.467 Removing: /var/run/dpdk/spdk_pid67507 00:23:16.467 Removing: /var/run/dpdk/spdk_pid71381 00:23:16.467 Removing: /var/run/dpdk/spdk_pid71740 00:23:16.467 Removing: /var/run/dpdk/spdk_pid71850 00:23:16.467 Removing: /var/run/dpdk/spdk_pid71985 00:23:16.467 Removing: /var/run/dpdk/spdk_pid72006 00:23:16.467 Removing: /var/run/dpdk/spdk_pid72040 00:23:16.467 Removing: /var/run/dpdk/spdk_pid72061 00:23:16.467 Removing: /var/run/dpdk/spdk_pid72159 00:23:16.467 Removing: /var/run/dpdk/spdk_pid72301 00:23:16.765 Removing: /var/run/dpdk/spdk_pid72451 00:23:16.765 Removing: /var/run/dpdk/spdk_pid72537 00:23:16.765 Removing: /var/run/dpdk/spdk_pid72734 00:23:16.765 Removing: /var/run/dpdk/spdk_pid72809 00:23:16.765 Removing: /var/run/dpdk/spdk_pid72895 00:23:16.765 Removing: /var/run/dpdk/spdk_pid73256 00:23:16.765 Removing: /var/run/dpdk/spdk_pid73684 00:23:16.765 Removing: /var/run/dpdk/spdk_pid73685 00:23:16.765 Removing: /var/run/dpdk/spdk_pid73686 00:23:16.765 Removing: /var/run/dpdk/spdk_pid73950 00:23:16.765 Removing: /var/run/dpdk/spdk_pid74215 00:23:16.765 Removing: /var/run/dpdk/spdk_pid74598 00:23:16.765 Removing: /var/run/dpdk/spdk_pid74611 00:23:16.765 Removing: /var/run/dpdk/spdk_pid74926 00:23:16.765 Removing: /var/run/dpdk/spdk_pid74947 
00:23:16.765 Removing: /var/run/dpdk/spdk_pid74961 00:23:16.765 Removing: /var/run/dpdk/spdk_pid74992 00:23:16.765 Removing: /var/run/dpdk/spdk_pid75001 00:23:16.765 Removing: /var/run/dpdk/spdk_pid75356 00:23:16.765 Removing: /var/run/dpdk/spdk_pid75406 00:23:16.765 Removing: /var/run/dpdk/spdk_pid75730 00:23:16.765 Removing: /var/run/dpdk/spdk_pid75929 00:23:16.765 Removing: /var/run/dpdk/spdk_pid76379 00:23:16.765 Removing: /var/run/dpdk/spdk_pid76929 00:23:16.765 Removing: /var/run/dpdk/spdk_pid77827 00:23:16.765 Removing: /var/run/dpdk/spdk_pid78462 00:23:16.765 Removing: /var/run/dpdk/spdk_pid78464 00:23:16.765 Removing: /var/run/dpdk/spdk_pid80518 00:23:16.765 Removing: /var/run/dpdk/spdk_pid80584 00:23:16.765 Removing: /var/run/dpdk/spdk_pid80644 00:23:16.765 Removing: /var/run/dpdk/spdk_pid80698 00:23:16.765 Removing: /var/run/dpdk/spdk_pid80806 00:23:16.765 Removing: /var/run/dpdk/spdk_pid80866 00:23:16.765 Removing: /var/run/dpdk/spdk_pid80919 00:23:16.765 Removing: /var/run/dpdk/spdk_pid80972 00:23:16.766 Removing: /var/run/dpdk/spdk_pid81346 00:23:16.766 Removing: /var/run/dpdk/spdk_pid82550 00:23:16.766 Removing: /var/run/dpdk/spdk_pid82702 00:23:16.766 Removing: /var/run/dpdk/spdk_pid82940 00:23:16.766 Removing: /var/run/dpdk/spdk_pid83530 00:23:16.766 Removing: /var/run/dpdk/spdk_pid83684 00:23:16.766 Removing: /var/run/dpdk/spdk_pid83847 00:23:16.766 Removing: /var/run/dpdk/spdk_pid83940 00:23:16.766 Removing: /var/run/dpdk/spdk_pid84109 00:23:16.766 Removing: /var/run/dpdk/spdk_pid84231 00:23:16.766 Removing: /var/run/dpdk/spdk_pid84939 00:23:16.766 Removing: /var/run/dpdk/spdk_pid84980 00:23:16.766 Removing: /var/run/dpdk/spdk_pid85014 00:23:16.766 Removing: /var/run/dpdk/spdk_pid85270 00:23:16.766 Removing: /var/run/dpdk/spdk_pid85301 00:23:16.766 Removing: /var/run/dpdk/spdk_pid85331 00:23:16.766 Removing: /var/run/dpdk/spdk_pid85821 00:23:16.766 Removing: /var/run/dpdk/spdk_pid85831 00:23:16.766 Removing: /var/run/dpdk/spdk_pid86085 00:23:16.766 Removing: /var/run/dpdk/spdk_pid86212 00:23:16.766 Removing: /var/run/dpdk/spdk_pid86223 00:23:16.766 Clean 00:23:16.766 13:42:28 -- common/autotest_common.sh@1453 -- # return 0 00:23:16.766 13:42:28 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:23:16.766 13:42:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.766 13:42:28 -- common/autotest_common.sh@10 -- # set +x 00:23:16.766 13:42:28 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:23:16.766 13:42:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.766 13:42:28 -- common/autotest_common.sh@10 -- # set +x 00:23:16.766 13:42:28 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:17.054 13:42:28 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:17.054 13:42:28 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:17.054 13:42:28 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:23:17.054 13:42:28 -- spdk/autotest.sh@398 -- # hostname 00:23:17.055 13:42:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:17.055 geninfo: WARNING: invalid characters removed from testname! 
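The coverage post-processing that begins here (the lcov capture above plus the add/remove passes that follow) reduces to a capture, merge, filter sequence. A condensed sketch using the same output paths; the testname is taken from hostname as in this run, and the exclude patterns are abbreviated from the full invocations in the log:

out=/home/vagrant/spdk_repo/spdk/../output
flags=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q)

lcov "${flags[@]}" -c --no-external -d /home/vagrant/spdk_repo/spdk \
    -t "$(hostname)" -o "$out/cov_test.info"                      # post-test counters
lcov "${flags[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" \
    -o "$out/cov_total.info"                                      # merge with the baseline
lcov "${flags[@]}" -r "$out/cov_total.info" '*/dpdk/*' '/usr/*' \
    -o "$out/cov_total.info"                                      # drop third-party code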
00:23:49.154 13:42:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:50.528 13:43:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:53.835 13:43:05 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:56.368 13:43:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:59.654 13:43:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:02.183 13:43:13 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:04.711 13:43:16 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:04.711 13:43:16 -- spdk/autorun.sh@1 -- $ timing_finish 00:24:04.711 13:43:16 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:24:04.711 13:43:16 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:04.711 13:43:16 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:04.711 13:43:16 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:04.711 + [[ -n 5366 ]] 00:24:04.711 + sudo kill 5366 00:24:04.719 [Pipeline] } 00:24:04.737 [Pipeline] // timeout 00:24:04.742 [Pipeline] } 00:24:04.758 [Pipeline] // stage 00:24:04.764 [Pipeline] } 00:24:04.777 [Pipeline] // catchError 00:24:04.787 [Pipeline] stage 00:24:04.789 [Pipeline] { (Stop VM) 00:24:04.802 [Pipeline] sh 00:24:05.085 + vagrant halt 00:24:09.272 ==> default: Halting domain... 
00:24:15.853 [Pipeline] sh 00:24:16.136 + vagrant destroy -f 00:24:20.322 ==> default: Removing domain... 00:24:20.335 [Pipeline] sh 00:24:20.618 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:24:20.628 [Pipeline] } 00:24:20.644 [Pipeline] // stage 00:24:20.652 [Pipeline] } 00:24:20.668 [Pipeline] // dir 00:24:20.674 [Pipeline] } 00:24:20.688 [Pipeline] // wrap 00:24:20.693 [Pipeline] } 00:24:20.706 [Pipeline] // catchError 00:24:20.716 [Pipeline] stage 00:24:20.718 [Pipeline] { (Epilogue) 00:24:20.730 [Pipeline] sh 00:24:21.010 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:27.724 [Pipeline] catchError 00:24:27.726 [Pipeline] { 00:24:27.740 [Pipeline] sh 00:24:28.021 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:28.021 Artifacts sizes are good 00:24:28.030 [Pipeline] } 00:24:28.045 [Pipeline] // catchError 00:24:28.057 [Pipeline] archiveArtifacts 00:24:28.065 Archiving artifacts 00:24:28.190 [Pipeline] cleanWs 00:24:28.201 [WS-CLEANUP] Deleting project workspace... 00:24:28.201 [WS-CLEANUP] Deferred wipeout is used... 00:24:28.207 [WS-CLEANUP] done 00:24:28.209 [Pipeline] } 00:24:28.224 [Pipeline] // stage 00:24:28.229 [Pipeline] } 00:24:28.244 [Pipeline] // node 00:24:28.249 [Pipeline] End of Pipeline 00:24:28.286 Finished: SUCCESS